mirror of https://github.com/postgres/postgres.git (synced 2025-07-07 00:36:50 +03:00)
Remove tabs after spaces in C comments
This was not changed in HEAD, but will be done later as part of a pgindent run. Future pgindent runs will also do this.

Report by Tom Lane.

Backpatch through all supported branches, but not HEAD.
@@ -82,11 +82,11 @@ geqo_eval(PlannerInfo *root, Gene *tour, int num_gene)
 * not already contain some entries. The newly added entries will be
 * recycled by the MemoryContextDelete below, so we must ensure that the
 * list is restored to its former state before exiting. We can do this by
- * truncating the list to its original length. NOTE this assumes that any
+ * truncating the list to its original length. NOTE this assumes that any
 * added entries are appended at the end!
 *
 * We also must take care not to mess up the outer join_rel_hash, if there
- * is one. We can do this by just temporarily setting the link to NULL.
+ * is one. We can do this by just temporarily setting the link to NULL.
 * (If we are dealing with enough join rels, which we very likely are, a
 * new hash table will get built and used locally.)
 *
@@ -217,7 +217,7 @@ gimme_tree(PlannerInfo *root, Gene *tour, int num_gene)
 * Merge a "clump" into the list of existing clumps for gimme_tree.
 *
 * We try to merge the clump into some existing clump, and repeat if
- * successful. When no more merging is possible, insert the clump
+ * successful. When no more merging is possible, insert the clump
 * into the list, preserving the list ordering rule (namely, that
 * clumps of larger size appear earlier).
 *
@@ -268,7 +268,7 @@ merge_clump(PlannerInfo *root, List *clumps, Clump *new_clump, bool force)

 /*
 * Recursively try to merge the enlarged old_clump with
- * others. When no further merge is possible, we'll reinsert
+ * others. When no further merge is possible, we'll reinsert
 * it into the list.
 */
 return merge_clump(root, clumps, old_clump, force);
@@ -279,7 +279,7 @@ merge_clump(PlannerInfo *root, List *clumps, Clump *new_clump, bool force)

 /*
 * No merging is possible, so add new_clump as an independent clump, in
- * proper order according to size. We can be fast for the common case
+ * proper order according to size. We can be fast for the common case
 * where it has size 1 --- it should always go at the end.
 */
 if (clumps == NIL || new_clump->size == 1)
@@ -435,7 +435,7 @@ set_foreign_pathlist(PlannerInfo *root, RelOptInfo *rel, RangeTblEntry *rte)
 * set_append_rel_size
 * Set size estimates for an "append relation"
 *
- * The passed-in rel and RTE represent the entire append relation. The
+ * The passed-in rel and RTE represent the entire append relation. The
 * relation's contents are computed by appending together the output of
 * the individual member relations. Note that in the inheritance case,
 * the first member relation is actually the same table as is mentioned in
@@ -499,7 +499,7 @@ set_append_rel_size(PlannerInfo *root, RelOptInfo *rel,

 /*
 * We have to copy the parent's targetlist and quals to the child,
- * with appropriate substitution of variables. However, only the
+ * with appropriate substitution of variables. However, only the
 * baserestrictinfo quals are needed before we can check for
 * constraint exclusion; so do that first and then check to see if we
 * can disregard this child.
@@ -563,7 +563,7 @@ set_append_rel_size(PlannerInfo *root, RelOptInfo *rel,

 /*
 * We have to make child entries in the EquivalenceClass data
- * structures as well. This is needed either if the parent
+ * structures as well. This is needed either if the parent
 * participates in some eclass joins (because we will want to consider
 * inner-indexscan joins on the individual children) or if the parent
 * has useful pathkeys (because we should try to build MergeAppend
@@ -604,7 +604,7 @@ set_append_rel_size(PlannerInfo *root, RelOptInfo *rel,

 /*
 * Accumulate per-column estimates too. We need not do anything
- * for PlaceHolderVars in the parent list. If child expression
+ * for PlaceHolderVars in the parent list. If child expression
 * isn't a Var, or we didn't record a width estimate for it, we
 * have to fall back on a datatype-based estimate.
 *
@@ -680,7 +680,7 @@ set_append_rel_pathlist(PlannerInfo *root, RelOptInfo *rel,

 /*
 * Generate access paths for each member relation, and remember the
- * cheapest path for each one. Also, identify all pathkeys (orderings)
+ * cheapest path for each one. Also, identify all pathkeys (orderings)
 * and parameterizations (required_outer sets) available for the member
 * relations.
 */
@@ -730,7 +730,7 @@ set_append_rel_pathlist(PlannerInfo *root, RelOptInfo *rel,

 /*
 * Collect lists of all the available path orderings and
- * parameterizations for all the children. We use these as a
+ * parameterizations for all the children. We use these as a
 * heuristic to indicate which sort orderings and parameterizations we
 * should build Append and MergeAppend paths for.
 */
@@ -816,7 +816,7 @@ set_append_rel_pathlist(PlannerInfo *root, RelOptInfo *rel,
 * so that not that many cases actually get considered here.)
 *
 * The Append node itself cannot enforce quals, so all qual checking must
- * be done in the child paths. This means that to have a parameterized
+ * be done in the child paths. This means that to have a parameterized
 * Append path, we must have the exact same parameterization for each
 * child path; otherwise some children might be failing to check the
 * moved-down quals. To make them match up, we can try to increase the
@@ -987,7 +987,7 @@ get_cheapest_parameterized_child_path(PlannerInfo *root, RelOptInfo *rel,
 * joinquals to be checked within the path's scan. However, some existing
 * paths might check the available joinquals already while others don't;
 * therefore, it's not clear which existing path will be cheapest after
- * reparameterization. We have to go through them all and find out.
+ * reparameterization. We have to go through them all and find out.
 */
 cheapest = NULL;
 foreach(lc, rel->pathlist)
@@ -1101,7 +1101,7 @@ has_multiple_baserels(PlannerInfo *root)
 *
 * We don't currently support generating parameterized paths for subqueries
 * by pushing join clauses down into them; it seems too expensive to re-plan
- * the subquery multiple times to consider different alternatives. So the
+ * the subquery multiple times to consider different alternatives. So the
 * subquery will have exactly one path. (The path will be parameterized
 * if the subquery contains LATERAL references, otherwise not.) Since there's
 * no freedom of action here, there's no need for a separate set_subquery_size
@@ -1510,7 +1510,7 @@ make_rel_from_joinlist(PlannerInfo *root, List *joinlist)
 * independent jointree items in the query. This is > 1.
 *
 * 'initial_rels' is a list of RelOptInfo nodes for each independent
- * jointree item. These are the components to be joined together.
+ * jointree item. These are the components to be joined together.
 * Note that levels_needed == list_length(initial_rels).
 *
 * Returns the final level of join relations, i.e., the relation that is
@@ -1526,7 +1526,7 @@ make_rel_from_joinlist(PlannerInfo *root, List *joinlist)
 * needed for these paths need have been instantiated.
 *
 * Note to plugin authors: the functions invoked during standard_join_search()
- * modify root->join_rel_list and root->join_rel_hash. If you want to do more
+ * modify root->join_rel_list and root->join_rel_hash. If you want to do more
 * than one join-order search, you'll probably need to save and restore the
 * original states of those data structures. See geqo_eval() for an example.
 */
@@ -1625,7 +1625,7 @@ standard_join_search(PlannerInfo *root, int levels_needed, List *initial_rels)
 * column k is found to be unsafe to reference, we set unsafeColumns[k] to
 * TRUE, but we don't reject the subquery overall since column k might
 * not be referenced by some/all quals. The unsafeColumns[] array will be
- * consulted later by qual_is_pushdown_safe(). It's better to do it this
+ * consulted later by qual_is_pushdown_safe(). It's better to do it this
 * way than to make the checks directly in qual_is_pushdown_safe(), because
 * when the subquery involves set operations we have to check the output
 * expressions in each arm of the set op.
@@ -1718,7 +1718,7 @@ recurse_pushdown_safe(Node *setOp, Query *topquery,
 * check_output_expressions - check subquery's output expressions for safety
 *
 * There are several cases in which it's unsafe to push down an upper-level
- * qual if it references a particular output column of a subquery. We check
+ * qual if it references a particular output column of a subquery. We check
 * each output column of the subquery and set unsafeColumns[k] to TRUE if
 * that column is unsafe for a pushed-down qual to reference. The conditions
 * checked here are:
@@ -1736,7 +1736,7 @@ recurse_pushdown_safe(Node *setOp, Query *topquery,
 * of rows returned. (This condition is vacuous for DISTINCT, because then
 * there are no non-DISTINCT output columns, so we needn't check. But note
 * we are assuming that the qual can't distinguish values that the DISTINCT
- * operator sees as equal. This is a bit shaky but we have no way to test
+ * operator sees as equal. This is a bit shaky but we have no way to test
 * for the case, and it's unlikely enough that we shouldn't refuse the
 * optimization just because it could theoretically happen.)
 */
@@ -1853,7 +1853,7 @@ qual_is_pushdown_safe(Query *subquery, Index rti, Node *qual,

 /*
 * It would be unsafe to push down window function calls, but at least for
- * the moment we could never see any in a qual anyhow. (The same applies
+ * the moment we could never see any in a qual anyhow. (The same applies
 * to aggregates, which we check for in pull_var_clause below.)
 */
 Assert(!contain_window_function(qual));
@@ -58,7 +58,7 @@ static void addRangeClause(RangeQueryClause **rqlist, Node *clause,
 * See clause_selectivity() for the meaning of the additional parameters.
 *
 * Our basic approach is to take the product of the selectivities of the
- * subclauses. However, that's only right if the subclauses have independent
+ * subclauses. However, that's only right if the subclauses have independent
 * probabilities, and in reality they are often NOT independent. So,
 * we want to be smarter where we can.
@@ -75,12 +75,12 @@ static void addRangeClause(RangeQueryClause **rqlist, Node *clause,
 * see that hisel is the fraction of the range below the high bound, while
 * losel is the fraction above the low bound; so hisel can be interpreted
 * directly as a 0..1 value but we need to convert losel to 1-losel before
- * interpreting it as a value. Then the available range is 1-losel to hisel.
+ * interpreting it as a value. Then the available range is 1-losel to hisel.
 * However, this calculation double-excludes nulls, so really we need
 * hisel + losel + null_frac - 1.)
 *
 * If either selectivity is exactly DEFAULT_INEQ_SEL, we forget this equation
- * and instead use DEFAULT_RANGE_INEQ_SEL. The same applies if the equation
+ * and instead use DEFAULT_RANGE_INEQ_SEL. The same applies if the equation
 * yields an impossible (negative) result.
 *
 * A free side-effect is that we can recognize redundant inequalities such
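The hunk above quotes the heuristic for collapsing a matched pair of inequalities into one range selectivity. A minimal standalone sketch of that combination rule, with the two DEFAULT_* constants assumed here for illustration rather than taken from the PostgreSQL tree:

    /*
     * Sketch of the range-pair heuristic described above; the constants are
     * assumed example values, not authoritative planner defaults.
     */
    #include <stdio.h>

    #define DEFAULT_INEQ_SEL       0.3333333333333333
    #define DEFAULT_RANGE_INEQ_SEL 0.005

    static double
    range_pair_selectivity(double hisel, double losel, double null_frac)
    {
        /* If either side is only the default guess, the subtraction is meaningless. */
        if (hisel == DEFAULT_INEQ_SEL || losel == DEFAULT_INEQ_SEL)
            return DEFAULT_RANGE_INEQ_SEL;

        /* Available range is (1 - losel) .. hisel, i.e. hisel + losel - 1;
         * add null_frac back because both inequalities excluded the nulls. */
        double s = hisel + losel + null_frac - 1.0;

        /* An impossible (non-positive) result also falls back to the default. */
        return (s > 0.0) ? s : DEFAULT_RANGE_INEQ_SEL;
    }

    int
    main(void)
    {
        /* e.g. "x < 80" passes 70% of rows, "x > 20" passes 75%, 2% NULLs */
        printf("%.4f\n", range_pair_selectivity(0.70, 0.75, 0.02)); /* 0.4700 */
        printf("%.4f\n", range_pair_selectivity(0.10, 0.20, 0.00)); /* falls back */
        return 0;
    }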
@@ -174,7 +174,7 @@ clauselist_selectivity(PlannerInfo *root,
 {
 /*
 * If it's not a "<" or ">" operator, just merge the
- * selectivity in generically. But if it's the right oprrest,
+ * selectivity in generically. But if it's the right oprrest,
 * add the clause to rqlist for later processing.
 */
 switch (get_oprrest(expr->opno))
@@ -459,14 +459,14 @@ treat_as_join_clause(Node *clause, RestrictInfo *rinfo,
 * nestloop join's inner relation --- varRelid should then be the ID of the
 * inner relation.
 *
- * When varRelid is 0, all variables are treated as variables. This
+ * When varRelid is 0, all variables are treated as variables. This
 * is appropriate for ordinary join clauses and restriction clauses.
 *
 * jointype is the join type, if the clause is a join clause. Pass JOIN_INNER
 * if the clause isn't a join clause.
 *
 * sjinfo is NULL for a non-join clause, otherwise it provides additional
- * context information about the join being performed. There are some
+ * context information about the join being performed. There are some
 * special cases:
 * 1. For a special (not INNER) join, sjinfo is always a member of
 * root->join_info_list.
@@ -501,7 +501,7 @@ clause_selectivity(PlannerInfo *root,
 /*
 * If the clause is marked pseudoconstant, then it will be used as a
 * gating qual and should not affect selectivity estimates; hence
- * return 1.0. The only exception is that a constant FALSE may be
+ * return 1.0. The only exception is that a constant FALSE may be
 * taken as having selectivity 0.0, since it will surely mean no rows
 * out of the plan. This case is simple enough that we need not
 * bother caching the result.
@@ -520,11 +520,11 @@ clause_selectivity(PlannerInfo *root,

 /*
 * If possible, cache the result of the selectivity calculation for
- * the clause. We can cache if varRelid is zero or the clause
+ * the clause. We can cache if varRelid is zero or the clause
 * contains only vars of that relid --- otherwise varRelid will affect
 * the result, so mustn't cache. Outer join quals might be examined
 * with either their join's actual jointype or JOIN_INNER, so we need
- * two cache variables to remember both cases. Note: we assume the
+ * two cache variables to remember both cases. Note: we assume the
 * result won't change if we are switching the input relations or
 * considering a unique-ified case, so we only need one cache variable
 * for all non-JOIN_INNER cases.
@@ -685,7 +685,7 @@ clause_selectivity(PlannerInfo *root,
 /*
 * This is not an operator, so we guess at the selectivity. THIS IS A
 * HACK TO GET V4 OUT THE DOOR. FUNCS SHOULD BE ABLE TO HAVE
- * SELECTIVITIES THEMSELVES. -- JMH 7/9/92
+ * SELECTIVITIES THEMSELVES. -- JMH 7/9/92
 */
 s1 = (Selectivity) 0.3333333;
 }
@@ -24,7 +24,7 @@
 *
 * Obviously, taking constants for these values is an oversimplification,
 * but it's tough enough to get any useful estimates even at this level of
- * detail. Note that all of these parameters are user-settable, in case
+ * detail. Note that all of these parameters are user-settable, in case
 * the default values are drastically off for a particular platform.
 *
 * seq_page_cost and random_page_cost can also be overridden for an individual
@@ -491,7 +491,7 @@ cost_index(IndexPath *path, PlannerInfo *root, double loop_count)
 * computed for us by query_planner.
 *
 * Caller is expected to have ensured that tuples_fetched is greater than zero
- * and rounded to integer (see clamp_row_est). The result will likewise be
+ * and rounded to integer (see clamp_row_est). The result will likewise be
 * greater than zero and integral.
 */
 double
@@ -692,7 +692,7 @@ cost_bitmap_heap_scan(Path *path, PlannerInfo *root, RelOptInfo *baserel,
 /*
 * For small numbers of pages we should charge spc_random_page_cost
 * apiece, while if nearly all the table's pages are being read, it's more
- * appropriate to charge spc_seq_page_cost apiece. The effect is
+ * appropriate to charge spc_seq_page_cost apiece. The effect is
 * nonlinear, too. For lack of a better idea, interpolate like this to
 * determine the cost per page.
 */
@@ -767,7 +767,7 @@ cost_bitmap_tree_node(Path *path, Cost *cost, Selectivity *selec)
 * Estimate the cost of a BitmapAnd node
 *
 * Note that this considers only the costs of index scanning and bitmap
- * creation, not the eventual heap access. In that sense the object isn't
+ * creation, not the eventual heap access. In that sense the object isn't
 * truly a Path, but it has enough path-like properties (costs in particular)
 * to warrant treating it as one. We don't bother to set the path rows field,
 * however.
@@ -826,7 +826,7 @@ cost_bitmap_or_node(BitmapOrPath *path, PlannerInfo *root)
 /*
 * We estimate OR selectivity on the assumption that the inputs are
 * non-overlapping, since that's often the case in "x IN (list)" type
- * situations. Of course, we clamp to 1.0 at the end.
+ * situations. Of course, we clamp to 1.0 at the end.
 *
 * The runtime cost of the BitmapOr itself is estimated at 100x
 * cpu_operator_cost for each tbm_union needed. Probably too small,
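A rough standalone sketch of the BitmapOr estimate described above; the cpu_operator_cost value and the data layout are assumptions made for the example, not code from costsize.c:

    #include <stdio.h>

    #define CPU_OPERATOR_COST 0.0025   /* assumed default */

    struct child_est { double cost; double selec; };

    static void
    bitmap_or_estimate(const struct child_est *kids, int n,
                       double *total_cost, double *selec)
    {
        double cost = 0.0, s = 0.0;

        for (int i = 0; i < n; i++)
        {
            cost += kids[i].cost;
            s += kids[i].selec;        /* treat the inputs as non-overlapping */
            if (i > 0)
                cost += 100.0 * CPU_OPERATOR_COST;  /* one tbm_union per extra input */
        }
        if (s > 1.0)
            s = 1.0;                   /* clamp to 1.0 at the end */
        *total_cost = cost;
        *selec = s;
    }

    int
    main(void)
    {
        struct child_est kids[] = {{10.0, 0.05}, {12.0, 0.70}, {8.0, 0.40}};
        double cost, selec;

        bitmap_or_estimate(kids, 3, &cost, &selec);
        printf("cost=%.2f selec=%.2f\n", cost, selec);  /* selectivity clamps to 1.00 */
        return 0;
    }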
@@ -915,7 +915,7 @@ cost_tidscan(Path *path, PlannerInfo *root,

 /*
 * We must force TID scan for WHERE CURRENT OF, because only nodeTidscan.c
- * understands how to do it correctly. Therefore, honor enable_tidscan
+ * understands how to do it correctly. Therefore, honor enable_tidscan
 * only when CURRENT OF isn't present. Also note that cost_qual_eval
 * counts a CurrentOfExpr as having startup cost disable_cost, which we
 * subtract off here; that's to prevent other plan types such as seqscan
@@ -1034,7 +1034,7 @@ cost_functionscan(Path *path, PlannerInfo *root,
 *
 * Currently, nodeFunctionscan.c always executes the function to
 * completion before returning any rows, and caches the results in a
- * tuplestore. So the function eval cost is all startup cost, and per-row
+ * tuplestore. So the function eval cost is all startup cost, and per-row
 * costs are minimal.
 *
 * XXX in principle we ought to charge tuplestore spill costs if the
@@ -1106,7 +1106,7 @@ cost_valuesscan(Path *path, PlannerInfo *root,
 *
 * Note: this is used for both self-reference and regular CTEs; the
 * possible cost differences are below the threshold of what we could
- * estimate accurately anyway. Note that the costs of evaluating the
+ * estimate accurately anyway. Note that the costs of evaluating the
 * referenced CTE query are added into the final plan as initplan costs,
 * and should NOT be counted here.
 */
@@ -1200,7 +1200,7 @@ cost_recursive_union(Plan *runion, Plan *nrterm, Plan *rterm)
 * If the total volume exceeds sort_mem, we switch to a tape-style merge
 * algorithm. There will still be about t*log2(t) tuple comparisons in
 * total, but we will also need to write and read each tuple once per
- * merge pass. We expect about ceil(logM(r)) merge passes where r is the
+ * merge pass. We expect about ceil(logM(r)) merge passes where r is the
 * number of initial runs formed and M is the merge order used by tuplesort.c.
 * Since the average initial run should be about twice sort_mem, we have
 * disk traffic = 2 * relsize * ceil(logM(p / (2*sort_mem)))
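The disk-traffic formula quoted above is easy to work through numerically; the relation size, sort_mem, and merge order below are assumed example values, not anything tuplesort.c computes:

    #include <math.h>
    #include <stdio.h>

    int
    main(void)
    {
        double relsize = 8.0 * 1024 * 1024 * 1024;   /* 8 GB of input, p */
        double sort_mem = 64.0 * 1024 * 1024;        /* 64 MB */
        double merge_order = 6.0;                    /* assumed M */

        /* initial runs average about twice sort_mem, so r ~= p / (2*sort_mem) */
        double runs = relsize / (2.0 * sort_mem);
        double passes = ceil(log(runs) / log(merge_order));

        /* each merge pass writes and reads every tuple once */
        double traffic = 2.0 * relsize * passes;

        printf("runs=%.0f passes=%.0f traffic=%.0f GB\n",
               runs, passes, traffic / (1024.0 * 1024.0 * 1024.0));
        return 0;
    }

With these numbers: 64 initial runs, 3 merge passes, about 48 GB of disk traffic.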
@@ -1214,7 +1214,7 @@ cost_recursive_union(Plan *runion, Plan *nrterm, Plan *rterm)
 * accesses (XXX can't we refine that guess?)
 *
 * By default, we charge two operator evals per tuple comparison, which should
- * be in the right ballpark in most cases. The caller can tweak this by
+ * be in the right ballpark in most cases. The caller can tweak this by
 * specifying nonzero comparison_cost; typically that's used for any extra
 * work that has to be done to prepare the inputs to the comparison operators.
 *
@@ -1338,7 +1338,7 @@ cost_sort(Path *path, PlannerInfo *root,
 * Determines and returns the cost of a MergeAppend node.
 *
 * MergeAppend merges several pre-sorted input streams, using a heap that
- * at any given instant holds the next tuple from each stream. If there
+ * at any given instant holds the next tuple from each stream. If there
 * are N streams, we need about N*log2(N) tuple comparisons to construct
 * the heap at startup, and then for each output tuple, about log2(N)
 * comparisons to delete the top heap entry and another log2(N) comparisons
@@ -1497,7 +1497,7 @@ cost_agg(Path *path, PlannerInfo *root,
 * group otherwise. We charge cpu_tuple_cost for each output tuple.
 *
 * Note: in this cost model, AGG_SORTED and AGG_HASHED have exactly the
- * same total CPU cost, but AGG_SORTED has lower startup cost. If the
+ * same total CPU cost, but AGG_SORTED has lower startup cost. If the
 * input path is already sorted appropriately, AGG_SORTED should be
 * preferred (since it has no risk of memory overflow). This will happen
 * as long as the computed total costs are indeed exactly equal --- but if
@@ -2097,10 +2097,10 @@ initial_cost_mergejoin(PlannerInfo *root, JoinCostWorkspace *workspace,
 * Unlike other costsize functions, this routine makes one actual decision:
 * whether we should materialize the inner path. We do that either because
 * the inner path can't support mark/restore, or because it's cheaper to
- * use an interposed Material node to handle mark/restore. When the decision
+ * use an interposed Material node to handle mark/restore. When the decision
 * is cost-based it would be logically cleaner to build and cost two separate
 * paths with and without that flag set; but that would require repeating most
- * of the cost calculations, which are not all that cheap. Since the choice
+ * of the cost calculations, which are not all that cheap. Since the choice
 * will not affect output pathkeys or startup cost, only total cost, there is
 * no possibility of wanting to keep both paths. So it seems best to make
 * the decision here and record it in the path's materialize_inner field.
@@ -2164,7 +2164,7 @@ final_cost_mergejoin(PlannerInfo *root, MergePath *path,
 qp_qual_cost.per_tuple -= merge_qual_cost.per_tuple;

 /*
- * Get approx # tuples passing the mergequals. We use approx_tuple_count
+ * Get approx # tuples passing the mergequals. We use approx_tuple_count
 * here because we need an estimate done with JOIN_INNER semantics.
 */
 mergejointuples = approx_tuple_count(root, &path->jpath, mergeclauses);
@@ -2178,7 +2178,7 @@ final_cost_mergejoin(PlannerInfo *root, MergePath *path,
 * estimated approximately as size of merge join output minus size of
 * inner relation. Assume that the distinct key values are 1, 2, ..., and
 * denote the number of values of each key in the outer relation as m1,
- * m2, ...; in the inner relation, n1, n2, ... Then we have
+ * m2, ...; in the inner relation, n1, n2, ... Then we have
 *
 * size of join = m1 * n1 + m2 * n2 + ...
 *
@@ -2189,7 +2189,7 @@ final_cost_mergejoin(PlannerInfo *root, MergePath *path,
 * This equation works correctly for outer tuples having no inner match
 * (nk = 0), but not for inner tuples having no outer match (mk = 0); we
 * are effectively subtracting those from the number of rescanned tuples,
- * when we should not. Can we do better without expensive selectivity
+ * when we should not. Can we do better without expensive selectivity
 * computations?
 *
 * The whole issue is moot if we are working from a unique-ified outer
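A tiny numeric illustration of the rescanned-tuples estimate developed in the two hunks above; the per-key counts are invented for the example:

    #include <stdio.h>

    int
    main(void)
    {
        /* duplicate counts per key value: m[k] in the outer rel, n[k] in the inner */
        int m[] = {1, 3, 2};
        int n[] = {2, 4, 1};
        long joinsize = 0, innersize = 0;

        for (int k = 0; k < 3; k++)
        {
            joinsize += (long) m[k] * n[k];   /* m1*n1 + m2*n2 + ... */
            innersize += n[k];
        }

        /* inner fetches = inner size + rescanned tuples */
        long rescanned = joinsize - innersize;

        printf("join=%ld inner=%ld rescanned=%ld\n", joinsize, innersize, rescanned);
        return 0;
    }

Here the join produces 16 rows from an inner relation of 7 rows, so about 9 inner tuples are re-fetched via mark/restore.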
@@ -2209,7 +2209,7 @@ final_cost_mergejoin(PlannerInfo *root, MergePath *path,

 /*
 * Decide whether we want to materialize the inner input to shield it from
- * mark/restore and performing re-fetches. Our cost model for regular
+ * mark/restore and performing re-fetches. Our cost model for regular
 * re-fetches is that a re-fetch costs the same as an original fetch,
 * which is probably an overestimate; but on the other hand we ignore the
 * bookkeeping costs of mark/restore. Not clear if it's worth developing
@@ -2302,7 +2302,7 @@ final_cost_mergejoin(PlannerInfo *root, MergePath *path,
 /*
 * For each tuple that gets through the mergejoin proper, we charge
 * cpu_tuple_cost plus the cost of evaluating additional restriction
- * clauses that are to be applied at the join. (This is pessimistic since
+ * clauses that are to be applied at the join. (This is pessimistic since
 * not all of the quals may get evaluated at each tuple.)
 *
 * Note: we could adjust for SEMI/ANTI joins skipping some qual
@@ -2454,7 +2454,7 @@ initial_cost_hashjoin(PlannerInfo *root, JoinCostWorkspace *workspace,
 * If inner relation is too big then we will need to "batch" the join,
 * which implies writing and reading most of the tuples to disk an extra
 * time. Charge seq_page_cost per page, since the I/O should be nice and
- * sequential. Writing the inner rel counts as startup cost, all the rest
+ * sequential. Writing the inner rel counts as startup cost, all the rest
 * as run cost.
 */
 if (numbatches > 1)
@@ -2685,7 +2685,7 @@ final_cost_hashjoin(PlannerInfo *root, HashPath *path,
 /*
 * For each tuple that gets through the hashjoin proper, we charge
 * cpu_tuple_cost plus the cost of evaluating additional restriction
- * clauses that are to be applied at the join. (This is pessimistic since
+ * clauses that are to be applied at the join. (This is pessimistic since
 * not all of the quals may get evaluated at each tuple.)
 */
 startup_cost += qp_qual_cost.startup;
@@ -2738,7 +2738,7 @@ cost_subplan(PlannerInfo *root, SubPlan *subplan, Plan *plan)
 {
 /*
 * Otherwise we will be rescanning the subplan output on each
- * evaluation. We need to estimate how much of the output we will
+ * evaluation. We need to estimate how much of the output we will
 * actually need to scan. NOTE: this logic should agree with the
 * tuple_fraction estimates used by make_subplan() in
 * plan/subselect.c.
@@ -2786,10 +2786,10 @@ cost_subplan(PlannerInfo *root, SubPlan *subplan, Plan *plan)
 /*
 * cost_rescan
 * Given a finished Path, estimate the costs of rescanning it after
- * having done so the first time. For some Path types a rescan is
+ * having done so the first time. For some Path types a rescan is
 * cheaper than an original scan (if no parameters change), and this
 * function embodies knowledge about that. The default is to return
- * the same costs stored in the Path. (Note that the cost estimates
+ * the same costs stored in the Path. (Note that the cost estimates
 * actually stored in Paths are always for first scans.)
 *
 * This function is not currently intended to model effects such as rescans
@@ -2830,7 +2830,7 @@ cost_rescan(PlannerInfo *root, Path *path,
 {
 /*
 * These plan types materialize their final result in a
- * tuplestore or tuplesort object. So the rescan cost is only
+ * tuplestore or tuplesort object. So the rescan cost is only
 * cpu_tuple_cost per tuple, unless the result is large enough
 * to spill to disk.
 */
@@ -2855,8 +2855,8 @@ cost_rescan(PlannerInfo *root, Path *path,
 {
 /*
 * These plan types not only materialize their results, but do
- * not implement qual filtering or projection. So they are
- * even cheaper to rescan than the ones above. We charge only
+ * not implement qual filtering or projection. So they are
+ * even cheaper to rescan than the ones above. We charge only
 * cpu_operator_cost per tuple. (Note: keep that in sync with
 * the run_cost charge in cost_sort, and also see comments in
 * cost_material before you change it.)
@@ -2997,7 +2997,7 @@ cost_qual_eval_walker(Node *node, cost_qual_eval_context *context)
 * evaluation of AND/OR? Probably *not*, because that would make the
 * results depend on the clause ordering, and we are not in any position
 * to expect that the current ordering of the clauses is the one that's
- * going to end up being used. The above per-RestrictInfo caching would
+ * going to end up being used. The above per-RestrictInfo caching would
 * not mix well with trying to re-order clauses anyway.
 *
 * Another issue that is entirely ignored here is that if a set-returning
@@ -3119,7 +3119,7 @@ cost_qual_eval_walker(Node *node, cost_qual_eval_context *context)
 else if (IsA(node, AlternativeSubPlan))
 {
 /*
- * Arbitrarily use the first alternative plan for costing. (We should
+ * Arbitrarily use the first alternative plan for costing. (We should
 * certainly only include one alternative, and we don't yet have
 * enough information to know which one the executor is most likely to
 * use.)
@@ -3263,13 +3263,13 @@ compute_semi_anti_join_factors(PlannerInfo *root,
 /*
 * jselec can be interpreted as the fraction of outer-rel rows that have
 * any matches (this is true for both SEMI and ANTI cases). And nselec is
- * the fraction of the Cartesian product that matches. So, the average
+ * the fraction of the Cartesian product that matches. So, the average
 * number of matches for each outer-rel row that has at least one match is
 * nselec * inner_rows / jselec.
 *
 * Note: it is correct to use the inner rel's "rows" count here, even
 * though we might later be considering a parameterized inner path with
- * fewer rows. This is because we have included all the join clauses in
+ * fewer rows. This is because we have included all the join clauses in
 * the selectivity estimate.
 */
 if (jselec > 0) /* protect against zero divide */
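A short numeric sketch of the average-match identity above, using made-up row counts and selectivities rather than planner output:

    #include <stdio.h>

    int
    main(void)
    {
        double outer_rows = 1000.0;
        double inner_rows = 500.0;
        double jselec = 0.20;   /* fraction of outer rows with at least one match */
        double nselec = 0.002;  /* fraction of the Cartesian product that matches */
        double avgmatch = 0.0;

        /* total matches are nselec * outer_rows * inner_rows; spreading them over
         * the jselec * outer_rows matched rows gives nselec * inner_rows / jselec */
        if (jselec > 0)         /* protect against zero divide */
            avgmatch = nselec * inner_rows / jselec;

        printf("matched outer rows=%.0f, avg matches per matched row=%.1f\n",
               jselec * outer_rows, avgmatch);
        return 0;
    }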
@@ -3597,7 +3597,7 @@ calc_joinrel_size_estimate(PlannerInfo *root,
 double nrows;

 /*
- * Compute joinclause selectivity. Note that we are only considering
+ * Compute joinclause selectivity. Note that we are only considering
 * clauses that become restriction clauses at this join level; we are not
 * double-counting them because they were not considered in estimating the
 * sizes of the component rels.
@@ -3655,7 +3655,7 @@ calc_joinrel_size_estimate(PlannerInfo *root,
 *
 * If we are doing an outer join, take that into account: the joinqual
 * selectivity has to be clamped using the knowledge that the output must
- * be at least as large as the non-nullable input. However, any
+ * be at least as large as the non-nullable input. However, any
 * pushed-down quals are applied after the outer join, so their
 * selectivity applies fully.
 *
@@ -3726,7 +3726,7 @@ set_subquery_size_estimates(PlannerInfo *root, RelOptInfo *rel)

 /*
 * Compute per-output-column width estimates by examining the subquery's
- * targetlist. For any output that is a plain Var, get the width estimate
+ * targetlist. For any output that is a plain Var, get the width estimate
 * that was made while planning the subquery. Otherwise, we leave it to
 * set_rel_width to fill in a datatype-based default estimate.
 */
@@ -3745,7 +3745,7 @@ set_subquery_size_estimates(PlannerInfo *root, RelOptInfo *rel)
 * The subquery could be an expansion of a view that's had columns
 * added to it since the current query was parsed, so that there are
 * non-junk tlist columns in it that don't correspond to any column
- * visible at our query level. Ignore such columns.
+ * visible at our query level. Ignore such columns.
 */
 if (te->resno < rel->min_attr || te->resno > rel->max_attr)
 continue;
@@ -3882,7 +3882,7 @@ set_cte_size_estimates(PlannerInfo *root, RelOptInfo *rel, Plan *cteplan)
 * of estimating baserestrictcost, so we set that, and we also set up width
 * using what will be purely datatype-driven estimates from the targetlist.
 * There is no way to do anything sane with the rows value, so we just put
- * a default estimate and hope that the wrapper can improve on it. The
+ * a default estimate and hope that the wrapper can improve on it. The
 * wrapper's GetForeignRelSize function will be called momentarily.
 *
 * The rel's targetlist and restrictinfo list must have been constructed
@@ -4003,7 +4003,7 @@ set_rel_width(PlannerInfo *root, RelOptInfo *rel)
 {
 /*
 * We could be looking at an expression pulled up from a subquery,
- * or a ROW() representing a whole-row child Var, etc. Do what we
+ * or a ROW() representing a whole-row child Var, etc. Do what we
 * can using the expression type information.
 */
 int32 item_width;
@@ -74,7 +74,7 @@ static bool reconsider_full_join_clause(PlannerInfo *root,
 *
 * If below_outer_join is true, then the clause was found below the nullable
 * side of an outer join, so its sides might validly be both NULL rather than
- * strictly equal. We can still deduce equalities in such cases, but we take
+ * strictly equal. We can still deduce equalities in such cases, but we take
 * care to mark an EquivalenceClass if it came from any such clauses. Also,
 * we have to check that both sides are either pseudo-constants or strict
 * functions of Vars, else they might not both go to NULL above the outer
@@ -141,9 +141,9 @@ process_equivalence(PlannerInfo *root, RestrictInfo *restrictinfo,
 collation);

 /*
- * Reject clauses of the form X=X. These are not as redundant as they
+ * Reject clauses of the form X=X. These are not as redundant as they
 * might seem at first glance: assuming the operator is strict, this is
- * really an expensive way to write X IS NOT NULL. So we must not risk
+ * really an expensive way to write X IS NOT NULL. So we must not risk
 * just losing the clause, which would be possible if there is already a
 * single-element EquivalenceClass containing X. The case is not common
 * enough to be worth contorting the EC machinery for, so just reject the
@@ -187,14 +187,14 @@ process_equivalence(PlannerInfo *root, RestrictInfo *restrictinfo,
 * Sweep through the existing EquivalenceClasses looking for matches to
 * item1 and item2. These are the possible outcomes:
 *
- * 1. We find both in the same EC. The equivalence is already known, so
+ * 1. We find both in the same EC. The equivalence is already known, so
 * there's nothing to do.
 *
 * 2. We find both in different ECs. Merge the two ECs together.
 *
 * 3. We find just one. Add the other to its EC.
 *
- * 4. We find neither. Make a new, two-entry EC.
+ * 4. We find neither. Make a new, two-entry EC.
 *
 * Note: since all ECs are built through this process or the similar
 * search in get_eclass_for_sort_expr(), it's impossible that we'd match
@@ -294,7 +294,7 @@ process_equivalence(PlannerInfo *root, RestrictInfo *restrictinfo,

 /*
 * We add ec2's items to ec1, then set ec2's ec_merged link to point
- * to ec1 and remove ec2 from the eq_classes list. We cannot simply
+ * to ec1 and remove ec2 from the eq_classes list. We cannot simply
 * delete ec2 because that could leave dangling pointers in existing
 * PathKeys. We leave it behind with a link so that the merged EC can
 * be found.
@@ -406,7 +406,7 @@ process_equivalence(PlannerInfo *root, RestrictInfo *restrictinfo,
 * Also, the expression's exposed collation must match the EC's collation.
 * This is important because in comparisons like "foo < bar COLLATE baz",
 * only one of the expressions has the correct exposed collation as we receive
- * it from the parser. Forcing both of them to have it ensures that all
+ * it from the parser. Forcing both of them to have it ensures that all
 * variant spellings of such a construct behave the same. Again, we can
 * stick on a RelabelType to force the right exposed collation. (It might
 * work to not label the collation at all in EC members, but this is risky
@@ -511,22 +511,22 @@ add_eq_member(EquivalenceClass *ec, Expr *expr, Relids relids,
 * single-member EquivalenceClass for it.
 *
 * expr is the expression, and nullable_relids is the set of base relids
- * that are potentially nullable below it. We actually only care about
+ * that are potentially nullable below it. We actually only care about
 * the set of such relids that are used in the expression; but for caller
 * convenience, we perform that intersection step here. The caller need
 * only be sure that nullable_relids doesn't omit any nullable rels that
 * might appear in the expr.
 *
 * sortref is the SortGroupRef of the originating SortGroupClause, if any,
- * or zero if not. (It should never be zero if the expression is volatile!)
+ * or zero if not. (It should never be zero if the expression is volatile!)
 *
 * If rel is not NULL, it identifies a specific relation we're considering
 * a path for, and indicates that child EC members for that relation can be
- * considered. Otherwise child members are ignored. (Note: since child EC
+ * considered. Otherwise child members are ignored. (Note: since child EC
 * members aren't guaranteed unique, a non-NULL value means that there could
 * be more than one EC that matches the expression; if so it's order-dependent
 * which one you get. This is annoying but it only happens in corner cases,
- * so for now we live with just reporting the first match. See also
+ * so for now we live with just reporting the first match. See also
 * generate_implied_equalities_for_column and match_pathkeys_to_index.)
 *
 * If create_it is TRUE, we'll build a new EquivalenceClass when there is no
@@ -680,7 +680,7 @@ get_eclass_for_sort_expr(PlannerInfo *root,
 *
 * When an EC contains pseudoconstants, our strategy is to generate
 * "member = const1" clauses where const1 is the first constant member, for
- * every other member (including other constants). If we are able to do this
+ * every other member (including other constants). If we are able to do this
 * then we don't need any "var = var" comparisons because we've successfully
 * constrained all the vars at their points of creation. If we fail to
 * generate any of these clauses due to lack of cross-type operators, we fall
@@ -705,7 +705,7 @@ get_eclass_for_sort_expr(PlannerInfo *root,
 * "WHERE a.x = b.y AND b.y = a.z", the scheme breaks down if we cannot
 * generate "a.x = a.z" as a restriction clause for A.) In this case we mark
 * the EC "ec_broken" and fall back to regurgitating its original source
- * RestrictInfos at appropriate times. We do not try to retract any derived
+ * RestrictInfos at appropriate times. We do not try to retract any derived
 * clauses already generated from the broken EC, so the resulting plan could
 * be poor due to bad selectivity estimates caused by redundant clauses. But
 * the correct solution to that is to fix the opfamilies ...
@@ -968,8 +968,8 @@ generate_base_implied_equalities_broken(PlannerInfo *root,
 * built any join RelOptInfos.
 *
 * An annoying special case for parameterized scans is that the inner rel can
- * be an appendrel child (an "other rel"). In this case we must generate
- * appropriate clauses using child EC members. add_child_rel_equivalences
+ * be an appendrel child (an "other rel"). In this case we must generate
+ * appropriate clauses using child EC members. add_child_rel_equivalences
 * must already have been done for the child rel.
 *
 * The results are sufficient for use in merge, hash, and plain nestloop join
@@ -983,7 +983,7 @@ generate_base_implied_equalities_broken(PlannerInfo *root,
 * we consider different join paths, we avoid generating multiple copies:
 * whenever we select a particular pair of EquivalenceMembers to join,
 * we check to see if the pair matches any original clause (in ec_sources)
- * or previously-built clause (in ec_derives). This saves memory and allows
+ * or previously-built clause (in ec_derives). This saves memory and allows
 * re-use of information cached in RestrictInfos.
 *
 * join_relids should always equal bms_union(outer_relids, inner_rel->relids).
@@ -1079,7 +1079,7 @@ generate_join_implied_equalities_normal(PlannerInfo *root,
 * First, scan the EC to identify member values that are computable at the
 * outer rel, at the inner rel, or at this relation but not in either
 * input rel. The outer-rel members should already be enforced equal,
- * likewise for the inner-rel members. We'll need to create clauses to
+ * likewise for the inner-rel members. We'll need to create clauses to
 * enforce that any newly computable members are all equal to each other
 * as well as to at least one input member, plus enforce at least one
 * outer-rel member equal to at least one inner-rel member.
@@ -1105,7 +1105,7 @@ generate_join_implied_equalities_normal(PlannerInfo *root,
 }

 /*
- * First, select the joinclause if needed. We can equate any one outer
+ * First, select the joinclause if needed. We can equate any one outer
 * member to any one inner member, but we have to find a datatype
 * combination for which an opfamily member operator exists. If we have
 * choices, we prefer simple Var members (possibly with RelabelType) since
@@ -1323,8 +1323,8 @@ create_join_clause(PlannerInfo *root,

 /*
 * Search to see if we already built a RestrictInfo for this pair of
- * EquivalenceMembers. We can use either original source clauses or
- * previously-derived clauses. The check on opno is probably redundant,
+ * EquivalenceMembers. We can use either original source clauses or
+ * previously-derived clauses. The check on opno is probably redundant,
 * but be safe ...
 */
 foreach(lc, ec->ec_sources)
@@ -1455,7 +1455,7 @@ create_join_clause(PlannerInfo *root,
 *
 * Outer join clauses that are marked outerjoin_delayed are special: this
 * condition means that one or both VARs might go to null due to a lower
- * outer join. We can still push a constant through the clause, but only
+ * outer join. We can still push a constant through the clause, but only
 * if its operator is strict; and we *have to* throw the clause back into
 * regular joinclause processing. By keeping the strict join clause,
 * we ensure that any null-extended rows that are mistakenly generated due
@@ -1649,7 +1649,7 @@ reconsider_outer_join_clause(PlannerInfo *root, RestrictInfo *rinfo,

 /*
 * Yes it does! Try to generate a clause INNERVAR = CONSTANT for each
- * CONSTANT in the EC. Note that we must succeed with at least one
+ * CONSTANT in the EC. Note that we must succeed with at least one
 * constant before we can decide to throw away the outer-join clause.
 */
 match = false;
@@ -2051,7 +2051,7 @@ mutate_eclass_expressions(PlannerInfo *root,
 * is a redundant list of clauses equating the table/index column to each of
 * the other-relation values it is known to be equal to. Any one of
 * these clauses can be used to create a parameterized path, and there
- * is no value in using more than one. (But it *is* worthwhile to create
+ * is no value in using more than one. (But it *is* worthwhile to create
 * a separate parameterized path for each one, since that leads to different
 * join orders.)
 *
@@ -2098,12 +2098,12 @@ generate_implied_equalities_for_column(PlannerInfo *root,
 continue;

 /*
- * Scan members, looking for a match to the target column. Note that
+ * Scan members, looking for a match to the target column. Note that
 * child EC members are considered, but only when they belong to the
 * target relation. (Unlike regular members, the same expression
 * could be a child member of more than one EC. Therefore, it's
 * potentially order-dependent which EC a child relation's target
- * column gets matched to. This is annoying but it only happens in
+ * column gets matched to. This is annoying but it only happens in
 * corner cases, so for now we live with just reporting the first
 * match. See also get_eclass_for_sort_expr.)
 */
@@ -2182,7 +2182,7 @@ generate_implied_equalities_for_column(PlannerInfo *root,
 * a joinclause involving the two given relations.
 *
 * This is essentially a very cut-down version of
- * generate_join_implied_equalities(). Note it's OK to occasionally say "yes"
+ * generate_join_implied_equalities(). Note it's OK to occasionally say "yes"
 * incorrectly. Hence we don't bother with details like whether the lack of a
 * cross-type operator might prevent the clause from actually being generated.
 */
@@ -2218,7 +2218,7 @@ have_relevant_eclass_joinclause(PlannerInfo *root,
 * OK as a possibly-overoptimistic heuristic.
 *
 * We don't test ec_has_const either, even though a const eclass won't
- * generate real join clauses. This is because if we had "WHERE a.x =
+ * generate real join clauses. This is because if we had "WHERE a.x =
 * b.y and a.x = 42", it is worth considering a join between a and b,
 * since the join result is likely to be small even though it'll end
 * up being an unqualified nestloop.
@@ -2275,7 +2275,7 @@ has_relevant_eclass_joinclause(PlannerInfo *root, RelOptInfo *rel1)
 * against the specified relation.
 *
 * This is just a heuristic test and doesn't have to be exact; it's better
- * to say "yes" incorrectly than "no". Hence we don't bother with details
+ * to say "yes" incorrectly than "no". Hence we don't bother with details
 * like whether the lack of a cross-type operator might prevent the clause
 * from actually being generated.
 */
@@ -2296,7 +2296,7 @@ eclass_useful_for_merging(EquivalenceClass *eclass,

 /*
 * Note we don't test ec_broken; if we did, we'd need a separate code path
- * to look through ec_sources. Checking the members anyway is OK as a
+ * to look through ec_sources. Checking the members anyway is OK as a
 * possibly-overoptimistic heuristic.
 */
@@ -221,7 +221,7 @@ static Const *string_to_const(const char *str, Oid datatype);
 * Note: in cases involving LATERAL references in the relation's tlist, it's
 * possible that rel->lateral_relids is nonempty. Currently, we include
 * lateral_relids into the parameterization reported for each path, but don't
- * take it into account otherwise. The fact that any such rels *must* be
+ * take it into account otherwise. The fact that any such rels *must* be
 * available as parameter sources perhaps should influence our choices of
 * index quals ... but for now, it doesn't seem worth troubling over.
 * In particular, comments below about "unparameterized" paths should be read
@@ -269,7 +269,7 @@ create_index_paths(PlannerInfo *root, RelOptInfo *rel)
 match_restriction_clauses_to_index(rel, index, &rclauseset);

 /*
- * Build index paths from the restriction clauses. These will be
+ * Build index paths from the restriction clauses. These will be
 * non-parameterized paths. Plain paths go directly to add_path(),
 * bitmap paths are added to bitindexpaths to be handled below.
 */
@@ -277,10 +277,10 @@ create_index_paths(PlannerInfo *root, RelOptInfo *rel)
 &bitindexpaths);

 /*
- * Identify the join clauses that can match the index. For the moment
- * we keep them separate from the restriction clauses. Note that this
+ * Identify the join clauses that can match the index. For the moment
+ * we keep them separate from the restriction clauses. Note that this
 * step finds only "loose" join clauses that have not been merged into
- * EquivalenceClasses. Also, collect join OR clauses for later.
+ * EquivalenceClasses. Also, collect join OR clauses for later.
 */
 MemSet(&jclauseset, 0, sizeof(jclauseset));
 match_join_clauses_to_index(root, rel, index,
@@ -344,9 +344,9 @@ create_index_paths(PlannerInfo *root, RelOptInfo *rel)

 /*
 * Likewise, if we found anything usable, generate BitmapHeapPaths for the
- * most promising combinations of join bitmap index paths. Our strategy
+ * most promising combinations of join bitmap index paths. Our strategy
 * is to generate one such path for each distinct parameterization seen
- * among the available bitmap index paths. This may look pretty
+ * among the available bitmap index paths. This may look pretty
 * expensive, but usually there won't be very many distinct
 * parameterizations. (This logic is quite similar to that in
 * consider_index_join_clauses, but we're working with whole paths not
@@ -462,7 +462,7 @@ consider_index_join_clauses(PlannerInfo *root, RelOptInfo *rel,
 *
 * For simplicity in selecting relevant clauses, we represent each set of
 * outer rels as a maximum set of clause_relids --- that is, the indexed
- * relation itself is also included in the relids set. considered_relids
+ * relation itself is also included in the relids set. considered_relids
 * lists all relids sets we've already tried.
 */
 for (indexcol = 0; indexcol < index->ncolumns; indexcol++)
@@ -551,7 +551,7 @@ consider_index_join_outer_rels(PlannerInfo *root, RelOptInfo *rel,
 /*
 * If this clause was derived from an equivalence class, the
 * clause list may contain other clauses derived from the same
- * eclass. We should not consider that combining this clause with
+ * eclass. We should not consider that combining this clause with
 * one of those clauses generates a usefully different
 * parameterization; so skip if any clause derived from the same
 * eclass would already have been included when using oldrelids.
@@ -634,9 +634,9 @@ get_join_index_paths(PlannerInfo *root, RelOptInfo *rel,
 }

 /*
- * Add applicable eclass join clauses. The clauses generated for each
+ * Add applicable eclass join clauses. The clauses generated for each
 * column are redundant (cf generate_implied_equalities_for_column),
- * so we need at most one. This is the only exception to the general
+ * so we need at most one. This is the only exception to the general
 * rule of using all available index clauses.
 */
 foreach(lc, eclauseset->indexclauses[indexcol])
@@ -723,7 +723,7 @@ bms_equal_any(Relids relids, List *relids_list)
 * bitmap indexpaths are added to *bitindexpaths for later processing.
 *
 * This is a fairly simple frontend to build_index_paths(). Its reason for
- * existence is mainly to handle ScalarArrayOpExpr quals properly. If the
+ * existence is mainly to handle ScalarArrayOpExpr quals properly. If the
 * index AM supports them natively, we should just include them in simple
 * index paths. If not, we should exclude them while building simple index
 * paths, and then make a separate attempt to include them in bitmap paths.
@@ -737,7 +737,7 @@ get_index_paths(PlannerInfo *root, RelOptInfo *rel,
 ListCell *lc;

 /*
- * Build simple index paths using the clauses. Allow ScalarArrayOpExpr
+ * Build simple index paths using the clauses. Allow ScalarArrayOpExpr
 * clauses only if the index AM supports them natively.
 */
 indexpaths = build_index_paths(root, rel,
@@ -749,7 +749,7 @@ get_index_paths(PlannerInfo *root, RelOptInfo *rel,
 * Submit all the ones that can form plain IndexScan plans to add_path. (A
 * plain IndexPath can represent either a plain IndexScan or an
 * IndexOnlyScan, but for our purposes here that distinction does not
- * matter. However, some of the indexes might support only bitmap scans,
+ * matter. However, some of the indexes might support only bitmap scans,
 * and those we mustn't submit to add_path here.)
 *
 * Also, pick out the ones that are usable as bitmap scans. For that, we
@@ -793,7 +793,7 @@ get_index_paths(PlannerInfo *root, RelOptInfo *rel,
 * We return a list of paths because (1) this routine checks some cases
 * that should cause us to not generate any IndexPath, and (2) in some
 * cases we want to consider both a forward and a backward scan, so as
- * to obtain both sort orders. Note that the paths are just returned
+ * to obtain both sort orders. Note that the paths are just returned
 * to the caller and not immediately fed to add_path().
 *
 * At top level, useful_predicate should be exactly the index's predOK flag
@@ -976,7 +976,7 @@ build_index_paths(PlannerInfo *root, RelOptInfo *rel,
 }

 /*
- * 3. Check if an index-only scan is possible. If we're not building
+ * 3. Check if an index-only scan is possible. If we're not building
 * plain indexscans, this isn't relevant since bitmap scans don't support
 * index data retrieval anyway.
 */
@@ -1081,13 +1081,13 @@ build_paths_for_OR(PlannerInfo *root, RelOptInfo *rel,
 continue;

 /*
- * Ignore partial indexes that do not match the query. If a partial
+ * Ignore partial indexes that do not match the query. If a partial
 * index is marked predOK then we know it's OK. Otherwise, we have to
 * test whether the added clauses are sufficient to imply the
 * predicate. If so, we can use the index in the current context.
 *
 * We set useful_predicate to true iff the predicate was proven using
- * the current set of clauses. This is needed to prevent matching a
+ * the current set of clauses. This is needed to prevent matching a
 * predOK index to an arm of an OR, which would be a legal but
 * pointlessly inefficient plan. (A better plan will be generated by
 * just scanning the predOK index alone, no OR.)
@@ -1270,7 +1270,7 @@ generate_bitmap_or_paths(PlannerInfo *root, RelOptInfo *rel,
 *
 * This is a helper for generate_bitmap_or_paths(). We leave OR clauses
 * in the list whether they are joins or not, since we might be able to
- * extract a restriction item from an OR list. It's safe to leave such
+ * extract a restriction item from an OR list. It's safe to leave such
 * clauses in the list because match_clauses_to_index() will ignore them,
 * so there's no harm in passing such clauses to build_paths_for_OR().
 */
@@ -1298,7 +1298,7 @@ drop_indexable_join_clauses(RelOptInfo *rel, List *clauses)
 * Given a nonempty list of bitmap paths, AND them into one path.
 *
 * This is a nontrivial decision since we can legally use any subset of the
- * given path set. We want to choose a good tradeoff between selectivity
+ * given path set. We want to choose a good tradeoff between selectivity
 * and cost of computing the bitmap.
 *
 * The result is either a single one of the inputs, or a BitmapAndPath
@ -1325,12 +1325,12 @@ choose_bitmap_and(PlannerInfo *root, RelOptInfo *rel, List *paths)
|
||||
* In theory we should consider every nonempty subset of the given paths.
|
||||
* In practice that seems like overkill, given the crude nature of the
|
||||
* estimates, not to mention the possible effects of higher-level AND and
|
||||
* OR clauses. Moreover, it's completely impractical if there are a large
|
||||
* OR clauses. Moreover, it's completely impractical if there are a large
|
||||
* number of paths, since the work would grow as O(2^N).
|
||||
*
|
||||
* As a heuristic, we first check for paths using exactly the same sets of
|
||||
* WHERE clauses + index predicate conditions, and reject all but the
|
||||
* cheapest-to-scan in any such group. This primarily gets rid of indexes
|
||||
* cheapest-to-scan in any such group. This primarily gets rid of indexes
|
||||
* that include the interesting columns but also irrelevant columns. (In
|
||||
* situations where the DBA has gone overboard on creating variant
|
||||
* indexes, this can make for a very large reduction in the number of
|
||||
@ -1350,14 +1350,14 @@ choose_bitmap_and(PlannerInfo *root, RelOptInfo *rel, List *paths)
* costsize.c and clausesel.c aren't very smart about redundant clauses.
* They will usually double-count the redundant clauses, producing a
* too-small selectivity that makes a redundant AND step look like it
* reduces the total cost. Perhaps someday that code will be smarter and
* reduces the total cost. Perhaps someday that code will be smarter and
* we can remove this limitation. (But note that this also defends
* against flat-out duplicate input paths, which can happen because
* match_join_clauses_to_index will find the same OR join clauses that
* create_or_index_quals has pulled OR restriction clauses out of.)
*
* For the same reason, we reject AND combinations in which an index
* predicate clause duplicates another clause. Here we find it necessary
* predicate clause duplicates another clause. Here we find it necessary
* to be even stricter: we'll reject a partial index if any of its
* predicate clauses are implied by the set of WHERE clauses and predicate
* clauses used so far. This covers cases such as a condition "x = 42"
@ -1420,7 +1420,7 @@ choose_bitmap_and(PlannerInfo *root, RelOptInfo *rel, List *paths)
/*
* For each surviving index, consider it as an "AND group leader", and see
* whether adding on any of the later indexes results in an AND path with
* cheaper total cost than before. Then take the cheapest AND group.
* cheaper total cost than before. Then take the cheapest AND group.
*/
for (i = 0; i < npaths; i++)
{
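The greedy strategy the comment describes can be pictured with a toy model. The sketch below is illustrative only, not planner code: the cost model (one scan cost and one selectivity per path, plus a heap-visit cost proportional to the product of the selectivities) is an invented stand-in for the planner's real bitmap cost functions.

#include <stdio.h>

/* Toy stand-in for a bitmap index path: a scan cost and a selectivity. */
typedef struct ToyPath
{
    const char *name;
    double      scan_cost;
    double      selectivity;
} ToyPath;

/* Crude cost model: pay every member's scan, then visit heap pages in
 * proportion to the product of the member selectivities. */
static double
group_cost(double scan_total, double sel, double heap_cost_per_sel)
{
    return scan_total + sel * heap_cost_per_sel;
}

int
main(void)
{
    ToyPath     paths[] = {
        {"idx_a", 10.0, 0.05},
        {"idx_b", 12.0, 0.20},
        {"idx_c", 25.0, 0.50},
    };
    int         npaths = 3;
    double      heap_cost_per_sel = 1000.0;
    double      best_cost = -1.0;
    int         best_leader = -1;
    int         i, j;

    /* Each path in turn is the "AND group leader"; later paths are added
     * only if they make the group cheaper.  Keep the cheapest group seen. */
    for (i = 0; i < npaths; i++)
    {
        double      scan_total = paths[i].scan_cost;
        double      sel = paths[i].selectivity;
        double      cost = group_cost(scan_total, sel, heap_cost_per_sel);

        for (j = i + 1; j < npaths; j++)
        {
            double      trial = group_cost(scan_total + paths[j].scan_cost,
                                           sel * paths[j].selectivity,
                                           heap_cost_per_sel);

            if (trial < cost)
            {
                scan_total += paths[j].scan_cost;
                sel *= paths[j].selectivity;
                cost = trial;
            }
        }
        if (best_cost < 0 || cost < best_cost)
        {
            best_cost = cost;
            best_leader = i;
        }
    }
    printf("cheapest AND group is led by %s, cost %.1f\n",
           paths[best_leader].name, best_cost);
    return 0;
}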
@ -1753,7 +1753,7 @@ find_indexpath_quals(Path *bitmapqual, List **quals, List **preds)
/*
* find_list_position
* Return the given node's position (counting from 0) in the given
* list of nodes. If it's not equal() to any existing list member,
* list of nodes. If it's not equal() to any existing list member,
* add it at the end, and return that position.
*/
static int
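For illustration only (not from the PostgreSQL sources), the contract described for find_list_position can be mimicked with a plain int array standing in for a Node list compared with equal():

#include <stdio.h>

/* Return the position (from 0) of value in items[0..*nitems-1]; if it is
 * not present, append it and return the new position. */
static int
find_position(int *items, int *nitems, int value)
{
    int         i;

    for (i = 0; i < *nitems; i++)
    {
        if (items[i] == value)
            return i;
    }
    items[*nitems] = value;
    return (*nitems)++;
}

int
main(void)
{
    int         items[8];
    int         nitems = 0;

    printf("%d\n", find_position(items, &nitems, 42));   /* 0, appended */
    printf("%d\n", find_position(items, &nitems, 7));    /* 1, appended */
    printf("%d\n", find_position(items, &nitems, 42));   /* 0, found */
    return 0;
}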
@ -1859,7 +1859,7 @@ check_index_only(RelOptInfo *rel, IndexOptInfo *index)
* Since we produce parameterized paths before we've begun to generate join
* relations, it's impossible to predict exactly how many times a parameterized
* path will be iterated; we don't know the size of the relation that will be
* on the outside of the nestloop. However, we should try to account for
* on the outside of the nestloop. However, we should try to account for
* multiple iterations somehow in costing the path. The heuristic embodied
* here is to use the rowcount of the smallest other base relation needed in
* the join clauses used by the path. (We could alternatively consider the
@ -2074,7 +2074,7 @@ match_clause_to_index(IndexOptInfo *index,
* doesn't involve a volatile function or a Var of the index's relation.
* In particular, Vars belonging to other relations of the query are
* accepted here, since a clause of that form can be used in a
* parameterized indexscan. It's the responsibility of higher code levels
* parameterized indexscan. It's the responsibility of higher code levels
* to manage restriction and join clauses appropriately.
*
* Note: we do need to check for Vars of the index's relation on the
@ -2098,7 +2098,7 @@ match_clause_to_index(IndexOptInfo *index,
* It is also possible to match RowCompareExpr clauses to indexes (but
* currently, only btree indexes handle this). In this routine we will
* report a match if the first column of the row comparison matches the
* target index column. This is sufficient to guarantee that some index
* target index column. This is sufficient to guarantee that some index
* condition can be constructed from the RowCompareExpr --- whether the
* remaining columns match the index too is considered in
* adjust_rowcompare_for_index().
@ -2136,7 +2136,7 @@ match_clause_to_indexcol(IndexOptInfo *index,
bool plain_op;

/*
* Never match pseudoconstants to indexes. (Normally this could not
* Never match pseudoconstants to indexes. (Normally this could not
* happen anyway, since a pseudoconstant clause couldn't contain a Var,
* but what if someone builds an expression index on a constant? It's not
* totally unreasonable to do so with a partial index, either.)
@ -2420,7 +2420,7 @@ match_pathkeys_to_index(IndexOptInfo *index, List *pathkeys,
* We allow any column of the index to match each pathkey; they
* don't have to match left-to-right as you might expect. This is
* correct for GiST, which is the sole existing AM supporting
* amcanorderbyop. We might need different logic in future for
* amcanorderbyop. We might need different logic in future for
* other implementations.
*/
for (indexcol = 0; indexcol < index->ncolumns; indexcol++)
@ -2471,7 +2471,7 @@ match_pathkeys_to_index(IndexOptInfo *index, List *pathkeys,
* Note that we currently do not consider the collation of the ordering
* operator's result. In practical cases the result type will be numeric
* and thus have no collation, and it's not very clear what to match to
* if it did have a collation. The index's collation should match the
* if it did have a collation. The index's collation should match the
* ordering operator's input collation, not its result.
*
* If successful, return 'clause' as-is if the indexkey is on the left,
@ -2721,7 +2721,7 @@ ec_member_matches_indexcol(PlannerInfo *root, RelOptInfo *rel,
* if it is true.
* 2. A list of expressions in this relation, and a corresponding list of
* equality operators. The caller must have already checked that the operators
* represent equality. (Note: the operators could be cross-type; the
* represent equality. (Note: the operators could be cross-type; the
* expressions should correspond to their RHS inputs.)
*
* The caller need only supply equality conditions arising from joins;
@ -2910,7 +2910,7 @@ match_index_to_operand(Node *operand,
int indkey;

/*
* Ignore any RelabelType node above the operand. This is needed to be
* Ignore any RelabelType node above the operand. This is needed to be
* able to apply indexscanning in binary-compatible-operator cases. Note:
* we can assume there is at most one RelabelType node;
* eval_const_expressions() will have simplified if more than one.
@ -2977,10 +2977,10 @@ match_index_to_operand(Node *operand,
* indexscan machinery. The key idea is that these operators allow us
* to derive approximate indexscan qual clauses, such that any tuples
* that pass the operator clause itself must also satisfy the simpler
* indexscan condition(s). Then we can use the indexscan machinery
* indexscan condition(s). Then we can use the indexscan machinery
* to avoid scanning as much of the table as we'd otherwise have to,
* while applying the original operator as a qpqual condition to ensure
* we deliver only the tuples we want. (In essence, we're using a regular
* we deliver only the tuples we want. (In essence, we're using a regular
* index as if it were a lossy index.)
*
* An example of what we're doing is
@ -2994,7 +2994,7 @@ match_index_to_operand(Node *operand,
*
* Another thing that we do with this machinery is to provide special
* smarts for "boolean" indexes (that is, indexes on boolean columns
* that support boolean equality). We can transform a plain reference
* that support boolean equality). We can transform a plain reference
* to the indexkey into "indexkey = true", or "NOT indexkey" into
* "indexkey = false", so as to make the expression indexable using the
* regular index operators. (As of Postgres 8.1, we must do this here
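An illustrative sketch of that boolean-index rewrite, using an invented mini-expression type rather than the planner's Node tree: a bare boolean column reference becomes "col = true" and "NOT col" becomes "col = false".

#include <stdio.h>
#include <stdbool.h>

/* Invented mini-expression: a bare boolean column, its negation, or an
 * already-indexable "column = constant" comparison. */
typedef enum ExprKind
{
    BARE_COLUMN,
    NOT_COLUMN,
    COLUMN_EQ_CONST
} ExprKind;

typedef struct Expr
{
    ExprKind    kind;
    const char *column;
    bool        constval;       /* only meaningful for COLUMN_EQ_CONST */
} Expr;

/* Rewrite into the indexable "column = constant" form. */
static Expr
make_indexable(Expr e)
{
    Expr        result = e;

    if (e.kind == BARE_COLUMN)
    {
        result.kind = COLUMN_EQ_CONST;
        result.constval = true;
    }
    else if (e.kind == NOT_COLUMN)
    {
        result.kind = COLUMN_EQ_CONST;
        result.constval = false;
    }
    return result;
}

int
main(void)
{
    Expr        q1 = {BARE_COLUMN, "paid", false};
    Expr        q2 = {NOT_COLUMN, "paid", false};
    Expr        r1 = make_indexable(q1);
    Expr        r2 = make_indexable(q2);

    printf("%s = %s\n", r1.column, r1.constval ? "true" : "false");
    printf("%s = %s\n", r2.column, r2.constval ? "true" : "false");
    return 0;
}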
@ -3416,7 +3416,7 @@ expand_indexqual_opclause(RestrictInfo *rinfo, Oid opfamily, Oid idxcollation)
/*
* LIKE and regex operators are not members of any btree index opfamily,
* but they can be members of opfamilies for more exotic index types such
* as GIN. Therefore, we should only do expansion if the operator is
* as GIN. Therefore, we should only do expansion if the operator is
* actually not in the opfamily. But checking that requires a syscache
* lookup, so it's best to first see if the operator is one we are
* interested in.
@ -3534,7 +3534,7 @@ expand_indexqual_rowcompare(RestrictInfo *rinfo,
* column matches) or a simple OpExpr (if the first-column match is all
* there is). In these cases the modified clause is always "<=" or ">="
* even when the original was "<" or ">" --- this is necessary to match all
* the rows that could match the original. (We are essentially building a
* the rows that could match the original. (We are essentially building a
* lossy version of the row comparison when we do this.)
*
* *indexcolnos receives an integer list of the index column numbers (zero
|
@ -107,7 +107,7 @@ add_paths_to_joinrel(PlannerInfo *root,

/*
* If it's SEMI or ANTI join, compute correction factors for cost
* estimation. These will be the same for all paths.
* estimation. These will be the same for all paths.
*/
if (jointype == JOIN_SEMI || jointype == JOIN_ANTI)
compute_semi_anti_join_factors(root, outerrel, innerrel,
@ -122,7 +122,7 @@ add_paths_to_joinrel(PlannerInfo *root,
* to the parameter source rel instead of joining to the other input rel.
* This restriction reduces the number of parameterized paths we have to
* deal with at higher join levels, without compromising the quality of
* the resulting plan. We express the restriction as a Relids set that
* the resulting plan. We express the restriction as a Relids set that
* must overlap the parameterization of any proposed join path.
*/
foreach(lc, root->join_info_list)
@ -155,7 +155,7 @@ add_paths_to_joinrel(PlannerInfo *root,
* However, when a LATERAL subquery is involved, we have to be a bit
* laxer, because there will simply not be any paths for the joinrel that
* aren't parameterized by whatever the subquery is parameterized by,
* unless its parameterization is resolved within the joinrel. Hence, add
* unless its parameterization is resolved within the joinrel. Hence, add
* to param_source_rels anything that is laterally referenced in either
* input and is not in the join already.
*/
@ -208,7 +208,7 @@ add_paths_to_joinrel(PlannerInfo *root,

/*
* 1. Consider mergejoin paths where both relations must be explicitly
* sorted. Skip this if we can't mergejoin.
* sorted. Skip this if we can't mergejoin.
*/
if (mergejoin_allowed)
sort_inner_and_outer(root, joinrel, outerrel, innerrel,
@ -233,7 +233,7 @@ add_paths_to_joinrel(PlannerInfo *root,

/*
* 3. Consider paths where the inner relation need not be explicitly
* sorted. This includes mergejoins only (nestloops were already built in
* sorted. This includes mergejoins only (nestloops were already built in
* match_unsorted_outer).
*
* Diked out as redundant 2/13/2000 -- tgl. There isn't any really
@ -507,7 +507,7 @@ try_hashjoin_path(PlannerInfo *root,
* We already know that the clause is a binary opclause referencing only the
* rels in the current join. The point here is to check whether it has the
* form "outerrel_expr op innerrel_expr" or "innerrel_expr op outerrel_expr",
* rather than mixing outer and inner vars on either side. If it matches,
* rather than mixing outer and inner vars on either side. If it matches,
* we set the transient flag outer_is_left to identify which side is which.
*/
static inline bool
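The shape of that check can be sketched with relid sets as plain bitmasks (an assumption made for the example; the planner uses the Relids type and bms_is_subset()): a clause matches when one side's relids lie entirely in the outer relation and the other side's lie entirely in the inner relation.

#include <stdio.h>
#include <stdbool.h>

typedef unsigned int Relids;    /* toy relid set: one bit per base rel */

/* Is "sub" a subset of "super"? */
static bool
subset(Relids sub, Relids super)
{
    return (sub & ~super) == 0;
}

/* Report whether the clause has the form outer_expr op inner_expr (or the
 * reverse); set *outer_is_left accordingly, as the comment describes. */
static bool
clause_sides_match(Relids left_relids, Relids right_relids,
                   Relids outerrelids, Relids innerrelids,
                   bool *outer_is_left)
{
    if (subset(left_relids, outerrelids) && subset(right_relids, innerrelids))
    {
        *outer_is_left = true;
        return true;
    }
    if (subset(left_relids, innerrelids) && subset(right_relids, outerrelids))
    {
        *outer_is_left = false;
        return true;
    }
    return false;               /* mixes outer and inner vars on one side */
}

int
main(void)
{
    bool        outer_is_left;

    /* rels 1 and 2 are outer (bits 0x3), rel 3 is inner (bit 0x4) */
    if (clause_sides_match(0x4, 0x1, 0x3, 0x4, &outer_is_left))
        printf("matches, outer_is_left = %s\n",
               outer_is_left ? "true" : "false");
    return 0;
}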
@ -572,7 +572,7 @@ sort_inner_and_outer(PlannerInfo *root,
* sort.
*
* This function intentionally does not consider parameterized input
* paths, except when the cheapest-total is parameterized. If we did so,
* paths, except when the cheapest-total is parameterized. If we did so,
* we'd have a combinatorial explosion of mergejoin paths of dubious
* value. This interacts with decisions elsewhere that also discriminate
* against mergejoins with parameterized inputs; see comments in
@ -619,7 +619,7 @@ sort_inner_and_outer(PlannerInfo *root,
*
* Actually, it's not quite true that every mergeclause ordering will
* generate a different path order, because some of the clauses may be
* partially redundant (refer to the same EquivalenceClasses). Therefore,
* partially redundant (refer to the same EquivalenceClasses). Therefore,
* what we do is convert the mergeclause list to a list of canonical
* pathkeys, and then consider different orderings of the pathkeys.
*
@ -713,7 +713,7 @@ sort_inner_and_outer(PlannerInfo *root,
* cheapest-total inner-indexscan path (if any), and one on the
* cheapest-startup inner-indexscan path (if different).
*
* We also consider mergejoins if mergejoin clauses are available. We have
* We also consider mergejoins if mergejoin clauses are available. We have
* two ways to generate the inner path for a mergejoin: sort the cheapest
* inner path, or use an inner path that is already suitably ordered for the
* merge. If we have several mergeclauses, it could be that there is no inner
@ -845,8 +845,8 @@ match_unsorted_outer(PlannerInfo *root,

/*
* If we need to unique-ify the outer path, it's pointless to consider
* any but the cheapest outer. (XXX we don't consider parameterized
* outers, nor inners, for unique-ified cases. Should we?)
* any but the cheapest outer. (XXX we don't consider parameterized
* outers, nor inners, for unique-ified cases. Should we?)
*/
if (save_jointype == JOIN_UNIQUE_OUTER)
{
@ -887,7 +887,7 @@ match_unsorted_outer(PlannerInfo *root,
{
/*
* Consider nestloop joins using this outer path and various
* available paths for the inner relation. We consider the
* available paths for the inner relation. We consider the
* cheapest-total paths for each available parameterization of the
* inner relation, including the unparameterized case.
*/
@ -1042,7 +1042,7 @@ match_unsorted_outer(PlannerInfo *root,

/*
* Look for an inner path ordered well enough for the first
* 'sortkeycnt' innersortkeys. NB: trialsortkeys list is modified
* 'sortkeycnt' innersortkeys. NB: trialsortkeys list is modified
* destructively, which is why we made a copy...
*/
trialsortkeys = list_truncate(trialsortkeys, sortkeycnt);
|
@ -213,7 +213,7 @@ join_search_one_level(PlannerInfo *root, int level)

/*----------
* When special joins are involved, there may be no legal way
* to make an N-way join for some values of N. For example consider
* to make an N-way join for some values of N. For example consider
*
* SELECT ... FROM t1 WHERE
* x IN (SELECT ... FROM t2,t3 WHERE ...) AND
@ -337,7 +337,7 @@ join_is_legal(PlannerInfo *root, RelOptInfo *rel1, RelOptInfo *rel2,
ListCell *l;

/*
* Ensure output params are set on failure return. This is just to
* Ensure output params are set on failure return. This is just to
* suppress uninitialized-variable warnings from overly anal compilers.
*/
*sjinfo_p = NULL;
@ -345,7 +345,7 @@ join_is_legal(PlannerInfo *root, RelOptInfo *rel1, RelOptInfo *rel2,

/*
* If we have any special joins, the proposed join might be illegal; and
* in any case we have to determine its join type. Scan the join info
* in any case we have to determine its join type. Scan the join info
* list for conflicts.
*/
match_sjinfo = NULL;
@ -609,7 +609,7 @@ make_join_rel(PlannerInfo *root, RelOptInfo *rel1, RelOptInfo *rel2)

/*
* If it's a plain inner join, then we won't have found anything in
* join_info_list. Make up a SpecialJoinInfo so that selectivity
* join_info_list. Make up a SpecialJoinInfo so that selectivity
* estimation functions will know what's being joined.
*/
if (sjinfo == NULL)
@ -916,7 +916,7 @@ have_join_order_restriction(PlannerInfo *root,
*
* Essentially, this tests whether have_join_order_restriction() could
* succeed with this rel and some other one. It's OK if we sometimes
* say "true" incorrectly. (Therefore, we don't bother with the relatively
* say "true" incorrectly. (Therefore, we don't bother with the relatively
* expensive has_legal_joinclause test.)
*/
static bool
@ -1027,7 +1027,7 @@ is_dummy_rel(RelOptInfo *rel)
* dummy.
*
* Also, when called during GEQO join planning, we are in a short-lived
* memory context. We must make sure that the dummy path attached to a
* memory context. We must make sure that the dummy path attached to a
* baserel survives the GEQO cycle, else the baserel is trashed for future
* GEQO cycles. On the other hand, when we are marking a joinrel during GEQO,
* we don't want the dummy path to clutter the main planning context. Upshot
|
@ -41,7 +41,7 @@
*
* The added quals are partially redundant with the original OR, and therefore
* will cause the size of the joinrel to be underestimated when it is finally
* formed. (This would be true of a full transformation to CNF as well; the
* formed. (This would be true of a full transformation to CNF as well; the
* fault is not really in the transformation, but in clauselist_selectivity's
* inability to recognize redundant conditions.) To minimize the collateral
* damage, we want to minimize the number of quals added. Therefore we do
@ -56,7 +56,7 @@
* it is finally formed. This is a MAJOR HACK: it depends on the fact
* that clause selectivities are cached and on the fact that the same
* RestrictInfo node will appear in every joininfo list that might be used
* when the joinrel is formed. And it probably isn't right in cases where
* when the joinrel is formed. And it probably isn't right in cases where
* the size estimation is nonlinear (i.e., outer and IN joins). But it
* beats not doing anything.
*
@ -109,7 +109,7 @@ create_or_index_quals(PlannerInfo *root, RelOptInfo *rel)
* Use the generate_bitmap_or_paths() machinery to estimate the
* value of each OR clause. We can use regular restriction
* clauses along with the OR clause contents to generate
* indexquals. We pass restriction_only = true so that any
* indexquals. We pass restriction_only = true so that any
* sub-clauses that are actually joins will be ignored.
*/
List *orpaths;
|
@ -46,7 +46,7 @@ static bool right_merge_direction(PlannerInfo *root, PathKey *pathkey);
* entry if there's not one already.
*
* Note that this function must not be used until after we have completed
* merging EquivalenceClasses. (We don't try to enforce that here; instead,
* merging EquivalenceClasses. (We don't try to enforce that here; instead,
* equivclass.c will complain if a merge occurs after root->canon_pathkeys
* has become nonempty.)
*/
@ -120,7 +120,7 @@ make_canonical_pathkey(PlannerInfo *root,
*
* Both the given pathkey and the list members must be canonical for this
* to work properly, but that's okay since we no longer ever construct any
* non-canonical pathkeys. (Note: the notion of a pathkey *list* being
* non-canonical pathkeys. (Note: the notion of a pathkey *list* being
* canonical includes the additional requirement of no redundant entries,
* which is exactly what we are checking for here.)
*
@ -162,7 +162,7 @@ pathkey_is_redundant(PathKey *new_pathkey, List *pathkeys)
*
* If rel is not NULL, it identifies a specific relation we're considering
* a path for, and indicates that child EC members for that relation can be
* considered. Otherwise child members are ignored. (See the comments for
* considered. Otherwise child members are ignored. (See the comments for
* get_eclass_for_sort_expr.)
*
* create_it is TRUE if we should create any missing EquivalenceClass
@ -192,7 +192,7 @@ make_pathkey_from_sortinfo(PlannerInfo *root,
/*
* EquivalenceClasses need to contain opfamily lists based on the family
* membership of mergejoinable equality operators, which could belong to
* more than one opfamily. So we have to look up the opfamily's equality
* more than one opfamily. So we have to look up the opfamily's equality
* operator and get its membership.
*/
equality_op = get_opfamily_member(opfamily,
@ -355,7 +355,7 @@ get_cheapest_path_for_pathkeys(List *paths, List *pathkeys,

/*
* Since cost comparison is a lot cheaper than pathkey comparison, do
* that first. (XXX is that still true?)
* that first. (XXX is that still true?)
*/
if (matched_path != NULL &&
compare_path_costs(matched_path, path, cost_criterion) <= 0)
@ -397,7 +397,7 @@ get_cheapest_fractional_path_for_pathkeys(List *paths,

/*
* Since cost comparison is a lot cheaper than pathkey comparison, do
* that first. (XXX is that still true?)
* that first. (XXX is that still true?)
*/
if (matched_path != NULL &&
compare_fractional_path_costs(matched_path, path, fraction) <= 0)
@ -504,7 +504,7 @@ build_index_pathkeys(PlannerInfo *root,
/*
* convert_subquery_pathkeys
* Build a pathkeys list that describes the ordering of a subquery's
* result, in the terms of the outer query. This is essentially a
* result, in the terms of the outer query. This is essentially a
* task of conversion.
*
* 'rel': outer query's RelOptInfo for the subquery relation.
@ -557,7 +557,7 @@ convert_subquery_pathkeys(PlannerInfo *root, RelOptInfo *rel,

/*
* Note: it might look funny to be setting sortref = 0 for a
* reference to a volatile sub_eclass. However, the
* reference to a volatile sub_eclass. However, the
* expression is *not* volatile in the outer query: it's just
* a Var referencing whatever the subquery emitted. (IOW, the
* outer query isn't going to re-execute the volatile
@ -594,7 +594,7 @@ convert_subquery_pathkeys(PlannerInfo *root, RelOptInfo *rel,
/*
* Otherwise, the sub_pathkey's EquivalenceClass could contain
* multiple elements (representing knowledge that multiple items
* are effectively equal). Each element might match none, one, or
* are effectively equal). Each element might match none, one, or
* more of the output columns that are visible to the outer query.
* This means we may have multiple possible representations of the
* sub_pathkey in the context of the outer query. Ideally we
@ -822,7 +822,7 @@ make_pathkeys_for_sortclauses(PlannerInfo *root,
* right sides.
*
* Note this is called before EC merging is complete, so the links won't
* necessarily point to canonical ECs. Before they are actually used for
* necessarily point to canonical ECs. Before they are actually used for
* anything, update_mergeclause_eclasses must be called to ensure that
* they've been updated to point to canonical ECs.
*/
@ -956,7 +956,7 @@ find_mergeclauses_for_pathkeys(PlannerInfo *root,
* It's possible that multiple matching clauses might have different
* ECs on the other side, in which case the order we put them into our
* result makes a difference in the pathkeys required for the other
* input path. However this routine hasn't got any info about which
* input path. However this routine hasn't got any info about which
* order would be best, so we don't worry about that.
*
* It's also possible that the selected mergejoin clauses produce
@ -987,7 +987,7 @@ find_mergeclauses_for_pathkeys(PlannerInfo *root,

/*
* If we didn't find a mergeclause, we're done --- any additional
* sort-key positions in the pathkeys are useless. (But we can still
* sort-key positions in the pathkeys are useless. (But we can still
* mergejoin if we found at least one mergeclause.)
*/
if (matched_restrictinfos == NIL)
@ -1019,7 +1019,7 @@ find_mergeclauses_for_pathkeys(PlannerInfo *root,
* Returns a pathkeys list that can be applied to the outer relation.
*
* Since we assume here that a sort is required, there is no particular use
* in matching any available ordering of the outerrel. (joinpath.c has an
* in matching any available ordering of the outerrel. (joinpath.c has an
* entirely separate code path for considering sort-free mergejoins.) Rather,
* it's interesting to try to match the requested query_pathkeys so that a
* second output sort may be avoided; and failing that, we try to list "more
@ -1350,7 +1350,7 @@ pathkeys_useful_for_merging(PlannerInfo *root, RelOptInfo *rel, List *pathkeys)

/*
* If we didn't find a mergeclause, we're done --- any additional
* sort-key positions in the pathkeys are useless. (But we can still
* sort-key positions in the pathkeys are useless. (But we can still
* mergejoin if we found at least one mergeclause.)
*/
if (matched)
@ -1380,7 +1380,7 @@ right_merge_direction(PlannerInfo *root, PathKey *pathkey)
pathkey->pk_opfamily == query_pathkey->pk_opfamily)
{
/*
* Found a matching query sort column. Prefer this pathkey's
* Found a matching query sort column. Prefer this pathkey's
* direction iff it matches. Note that we ignore pk_nulls_first,
* which means that a sort might be needed anyway ... but we still
* want to prefer only one of the two possible directions, and we
@ -1456,13 +1456,13 @@ truncate_useless_pathkeys(PlannerInfo *root,
* useful according to truncate_useless_pathkeys().
*
* This is a cheap test that lets us skip building pathkeys at all in very
* simple queries. It's OK to err in the direction of returning "true" when
* simple queries. It's OK to err in the direction of returning "true" when
* there really aren't any usable pathkeys, but erring in the other direction
* is bad --- so keep this in sync with the routines above!
*
* We could make the test more complex, for example checking to see if any of
* the joinclauses are really mergejoinable, but that likely wouldn't win
* often enough to repay the extra cycles. Queries with neither a join nor
* often enough to repay the extra cycles. Queries with neither a join nor
* a sort are reasonably common, though, so this much work seems worthwhile.
*/
bool
|
@ -19,7 +19,7 @@
* representation all the way through to execution.
*
* There is currently no special support for joins involving CTID; in
* particular nothing corresponding to best_inner_indexscan(). Since it's
* particular nothing corresponding to best_inner_indexscan(). Since it's
* not very useful to store TIDs of one table in another table, there
* doesn't seem to be enough use-case to justify adding a lot of code
* for that.
@ -57,7 +57,7 @@ static List *TidQualFromRestrictinfo(List *restrictinfo, int varno);
* or
* pseudoconstant = CTID
*
* We check that the CTID Var belongs to relation "varno". That is probably
* We check that the CTID Var belongs to relation "varno". That is probably
* redundant considering this is only applied to restriction clauses, but
* let's be safe.
*/
|
@ -40,7 +40,7 @@ static List *remove_rel_from_joinlist(List *joinlist, int relid, int *nremoved);
* Check for relations that don't actually need to be joined at all,
* and remove them from the query.
*
* We are passed the current joinlist and return the updated list. Other
* We are passed the current joinlist and return the updated list. Other
* data structures that have to be updated are accessible via "root".
*/
List *
@ -90,7 +90,7 @@ restart:
* Restart the scan. This is necessary to ensure we find all
* removable joins independently of ordering of the join_info_list
* (note that removal of attr_needed bits may make a join appear
* removable that did not before). Also, since we just deleted the
* removable that did not before). Also, since we just deleted the
* current list cell, we'd have to have some kluge to continue the
* list scan anyway.
*/
@ -107,7 +107,7 @@ restart:
* We already know that the clause is a binary opclause referencing only the
* rels in the current join. The point here is to check whether it has the
* form "outerrel_expr op innerrel_expr" or "innerrel_expr op outerrel_expr",
* rather than mixing outer and inner vars on either side. If it matches,
* rather than mixing outer and inner vars on either side. If it matches,
* we set the transient flag outer_is_left to identify which side is which.
*/
static inline bool
@ -154,7 +154,7 @@ join_is_removable(PlannerInfo *root, SpecialJoinInfo *sjinfo)

/*
* Currently, we only know how to remove left joins to a baserel with
* unique indexes. We can check most of these criteria pretty trivially
* unique indexes. We can check most of these criteria pretty trivially
* to avoid doing useless extra work. But checking whether any of the
* indexes are unique would require iterating over the indexlist, so for
* now we just make sure there are indexes of some sort or other. If none
@ -203,7 +203,7 @@ join_is_removable(PlannerInfo *root, SpecialJoinInfo *sjinfo)
* actually references some inner-rel attributes; but the correct check
* for that is relatively expensive, so we first check against ph_eval_at,
* which must mention the inner rel if the PHV uses any inner-rel attrs as
* non-lateral references. Note that if the PHV's syntactic scope is just
* non-lateral references. Note that if the PHV's syntactic scope is just
* the inner rel, we can't drop the rel even if the PHV is variable-free.
*/
foreach(l, root->placeholder_list)
|
@ -173,7 +173,7 @@ static Material *make_material(Plan *lefttree);
|
||||
/*
|
||||
* create_plan
|
||||
* Creates the access plan for a query by recursively processing the
|
||||
* desired tree of pathnodes, starting at the node 'best_path'. For
|
||||
* desired tree of pathnodes, starting at the node 'best_path'. For
|
||||
* every pathnode found, we create a corresponding plan node containing
|
||||
* appropriate id, target list, and qualification information.
|
||||
*
|
||||
@ -288,7 +288,7 @@ create_scan_plan(PlannerInfo *root, Path *best_path)
|
||||
/*
|
||||
* For table scans, rather than using the relation targetlist (which is
|
||||
* only those Vars actually needed by the query), we prefer to generate a
|
||||
* tlist containing all Vars in order. This will allow the executor to
|
||||
* tlist containing all Vars in order. This will allow the executor to
|
||||
* optimize away projection of the table tuples, if possible. (Note that
|
||||
* planner.c may replace the tlist we generate here, forcing projection to
|
||||
* occur.)
|
||||
@ -525,7 +525,7 @@ use_physical_tlist(PlannerInfo *root, RelOptInfo *rel)
|
||||
*
|
||||
* If the plan node immediately above a scan would prefer to get only
|
||||
* needed Vars and not a physical tlist, it must call this routine to
|
||||
* undo the decision made by use_physical_tlist(). Currently, Hash, Sort,
|
||||
* undo the decision made by use_physical_tlist(). Currently, Hash, Sort,
|
||||
* and Material nodes want this, so they don't have to store useless columns.
|
||||
*/
|
||||
static void
|
||||
@ -656,7 +656,7 @@ create_join_plan(PlannerInfo *root, JoinPath *best_path)
|
||||
|
||||
/*
|
||||
* * Expensive function pullups may have pulled local predicates * into
|
||||
* this path node. Put them in the qpqual of the plan node. * JMH,
|
||||
* this path node. Put them in the qpqual of the plan node. * JMH,
|
||||
* 6/15/92
|
||||
*/
|
||||
if (get_loc_restrictinfo(best_path) != NIL)
|
||||
@ -1172,10 +1172,10 @@ create_indexscan_plan(PlannerInfo *root,
|
||||
/*
|
||||
* The qpqual list must contain all restrictions not automatically handled
|
||||
* by the index, other than pseudoconstant clauses which will be handled
|
||||
* by a separate gating plan node. All the predicates in the indexquals
|
||||
* by a separate gating plan node. All the predicates in the indexquals
|
||||
* will be checked (either by the index itself, or by nodeIndexscan.c),
|
||||
* but if there are any "special" operators involved then they must be
|
||||
* included in qpqual. The upshot is that qpqual must contain
|
||||
* included in qpqual. The upshot is that qpqual must contain
|
||||
* scan_clauses minus whatever appears in indexquals.
|
||||
*
|
||||
* In normal cases simple pointer equality checks will be enough to spot
|
||||
@ -1312,15 +1312,15 @@ create_bitmap_scan_plan(PlannerInfo *root,
|
||||
/*
|
||||
* The qpqual list must contain all restrictions not automatically handled
|
||||
* by the index, other than pseudoconstant clauses which will be handled
|
||||
* by a separate gating plan node. All the predicates in the indexquals
|
||||
* by a separate gating plan node. All the predicates in the indexquals
|
||||
* will be checked (either by the index itself, or by
|
||||
* nodeBitmapHeapscan.c), but if there are any "special" operators
|
||||
* involved then they must be added to qpqual. The upshot is that qpqual
|
||||
* involved then they must be added to qpqual. The upshot is that qpqual
|
||||
* must contain scan_clauses minus whatever appears in indexquals.
|
||||
*
|
||||
* This loop is similar to the comparable code in create_indexscan_plan(),
|
||||
* but with some differences because it has to compare the scan clauses to
|
||||
* stripped (no RestrictInfos) indexquals. See comments there for more
|
||||
* stripped (no RestrictInfos) indexquals. See comments there for more
|
||||
* info.
|
||||
*
|
||||
* In normal cases simple equal() checks will be enough to spot duplicate
|
||||
@ -1365,7 +1365,7 @@ create_bitmap_scan_plan(PlannerInfo *root,
|
||||
|
||||
/*
|
||||
* When dealing with special operators, we will at this point have
|
||||
* duplicate clauses in qpqual and bitmapqualorig. We may as well drop
|
||||
* duplicate clauses in qpqual and bitmapqualorig. We may as well drop
|
||||
* 'em from bitmapqualorig, since there's no point in making the tests
|
||||
* twice.
|
||||
*/
|
||||
@ -1480,7 +1480,7 @@ create_bitmap_subplan(PlannerInfo *root, Path *bitmapqual,
|
||||
/*
|
||||
* Here, we only detect qual-free subplans. A qual-free subplan would
|
||||
* cause us to generate "... OR true ..." which we may as well reduce
|
||||
* to just "true". We do not try to eliminate redundant subclauses
|
||||
* to just "true". We do not try to eliminate redundant subclauses
|
||||
* because (a) it's not as likely as in the AND case, and (b) we might
|
||||
* well be working with hundreds or even thousands of OR conditions,
|
||||
* perhaps from a long IN list. The performance of list_append_unique
|
||||
@ -1576,7 +1576,7 @@ create_bitmap_subplan(PlannerInfo *root, Path *bitmapqual,
|
||||
/*
|
||||
* We know that the index predicate must have been implied by the
|
||||
* query condition as a whole, but it may or may not be implied by
|
||||
* the conditions that got pushed into the bitmapqual. Avoid
|
||||
* the conditions that got pushed into the bitmapqual. Avoid
|
||||
* generating redundant conditions.
|
||||
*/
|
||||
if (!predicate_implied_by(list_make1(pred), ipath->indexclauses))
|
||||
@ -1963,14 +1963,14 @@ create_foreignscan_plan(PlannerInfo *root, ForeignPath *best_path,
|
||||
Assert(rte->rtekind == RTE_RELATION);
|
||||
|
||||
/*
|
||||
* Sort clauses into best execution order. We do this first since the FDW
|
||||
* Sort clauses into best execution order. We do this first since the FDW
|
||||
* might have more info than we do and wish to adjust the ordering.
|
||||
*/
|
||||
scan_clauses = order_qual_clauses(root, scan_clauses);
|
||||
|
||||
/*
|
||||
* Let the FDW perform its processing on the restriction clauses and
|
||||
* generate the plan node. Note that the FDW might remove restriction
|
||||
* generate the plan node. Note that the FDW might remove restriction
|
||||
* clauses that it intends to execute remotely, or even add more (if it
|
||||
* has selected some join clauses for remote use but also wants them
|
||||
* rechecked locally).
|
||||
@ -2624,7 +2624,7 @@ replace_nestloop_params_mutator(Node *node, PlannerInfo *root)
|
||||
*
|
||||
* Note that after doing this, we might have different
|
||||
* representations of the contents of the same PHV in different
|
||||
* parts of the plan tree. This is OK because equal() will just
|
||||
* parts of the plan tree. This is OK because equal() will just
|
||||
* match on phid/phlevelsup, so setrefs.c will still recognize an
|
||||
* upper-level reference to a lower-level copy of the same PHV.
|
||||
*/
|
||||
@ -2802,7 +2802,7 @@ fix_indexqual_references(PlannerInfo *root, IndexPath *index_path)
|
||||
|
||||
/*
|
||||
* Check to see if the indexkey is on the right; if so, commute
|
||||
* the clause. The indexkey should be the side that refers to
|
||||
* the clause. The indexkey should be the side that refers to
|
||||
* (only) the base relation.
|
||||
*/
|
||||
if (!bms_equal(rinfo->left_relids, index->rel->relids))
|
||||
@ -2896,7 +2896,7 @@ fix_indexqual_references(PlannerInfo *root, IndexPath *index_path)
|
||||
*
|
||||
* This is a simplified version of fix_indexqual_references. The input does
|
||||
* not have RestrictInfo nodes, and we assume that indxpath.c already
|
||||
* commuted the clauses to put the index keys on the left. Also, we don't
|
||||
* commuted the clauses to put the index keys on the left. Also, we don't
|
||||
* bother to support any cases except simple OpExprs, since nothing else
|
||||
* is allowed for ordering operators.
|
||||
*/
|
||||
@ -3135,7 +3135,7 @@ order_qual_clauses(PlannerInfo *root, List *clauses)
|
||||
|
||||
/*
|
||||
* Sort. We don't use qsort() because it's not guaranteed stable for
|
||||
* equal keys. The expected number of entries is small enough that a
|
||||
* equal keys. The expected number of entries is small enough that a
|
||||
* simple insertion sort should be good enough.
|
||||
*/
|
||||
for (i = 1; i < nitems; i++)
|
||||
@ -3786,7 +3786,7 @@ make_sort(PlannerInfo *root, Plan *lefttree, int numCols,
|
||||
* prepare_sort_from_pathkeys
|
||||
* Prepare to sort according to given pathkeys
|
||||
*
|
||||
* This is used to set up for both Sort and MergeAppend nodes. It calculates
|
||||
* This is used to set up for both Sort and MergeAppend nodes. It calculates
|
||||
* the executor's representation of the sort key information, and adjusts the
|
||||
* plan targetlist if needed to add resjunk sort columns.
|
||||
*
|
||||
@ -3799,7 +3799,7 @@ make_sort(PlannerInfo *root, Plan *lefttree, int numCols,
|
||||
*
|
||||
* We must convert the pathkey information into arrays of sort key column
|
||||
* numbers, sort operator OIDs, collation OIDs, and nulls-first flags,
|
||||
* which is the representation the executor wants. These are returned into
|
||||
* which is the representation the executor wants. These are returned into
|
||||
* the output parameters *p_numsortkeys etc.
|
||||
*
|
||||
* When looking for matches to an EquivalenceClass's members, we will only
|
||||
@ -4241,7 +4241,7 @@ make_material(Plan *lefttree)
|
||||
* materialize_finished_plan: stick a Material node atop a completed plan
|
||||
*
|
||||
* There are a couple of places where we want to attach a Material node
|
||||
* after completion of subquery_planner(). This currently requires hackery.
|
||||
* after completion of subquery_planner(). This currently requires hackery.
|
||||
* Since subquery_planner has already run SS_finalize_plan on the subplan
|
||||
* tree, we have to kluge up parameter lists for the Material node.
|
||||
* Possibly this could be fixed by postponing SS_finalize_plan processing
|
||||
@ -4447,7 +4447,7 @@ make_group(PlannerInfo *root,
|
||||
|
||||
/*
|
||||
* distinctList is a list of SortGroupClauses, identifying the targetlist items
|
||||
* that should be considered by the Unique filter. The input path must
|
||||
* that should be considered by the Unique filter. The input path must
|
||||
* already be sorted accordingly.
|
||||
*/
|
||||
Unique *
|
||||
@ -4465,7 +4465,7 @@ make_unique(Plan *lefttree, List *distinctList)
|
||||
|
||||
/*
|
||||
* Charge one cpu_operator_cost per comparison per input tuple. We assume
|
||||
* all columns get compared at most of the tuples. (XXX probably this is
|
||||
* all columns get compared at most of the tuples. (XXX probably this is
|
||||
* an overestimate.)
|
||||
*/
|
||||
plan->total_cost += cpu_operator_cost * plan->plan_rows * numCols;
|
||||
@ -4721,7 +4721,7 @@ make_result(PlannerInfo *root,
|
||||
* Build a ModifyTable plan node
|
||||
*
|
||||
* Currently, we don't charge anything extra for the actual table modification
|
||||
* work, nor for the RETURNING expressions if any. It would only be window
|
||||
* work, nor for the RETURNING expressions if any. It would only be window
|
||||
* dressing, since these are always top-level nodes and there is no way for
|
||||
* the costs to change any higher-level planning choices. But we might want
|
||||
* to make it look better sometime.
|
||||
|
@ -87,12 +87,12 @@ static void check_hashjoinable(RestrictInfo *restrictinfo);
|
||||
* appearing in the jointree.
|
||||
*
|
||||
* The initial invocation must pass root->parse->jointree as the value of
|
||||
* jtnode. Internally, the function recurses through the jointree.
|
||||
* jtnode. Internally, the function recurses through the jointree.
|
||||
*
|
||||
* At the end of this process, there should be one baserel RelOptInfo for
|
||||
* every non-join RTE that is used in the query. Therefore, this routine
|
||||
* is the only place that should call build_simple_rel with reloptkind
|
||||
* RELOPT_BASEREL. (Note: build_simple_rel recurses internally to build
|
||||
* RELOPT_BASEREL. (Note: build_simple_rel recurses internally to build
|
||||
* "other rel" RelOptInfos for the members of any appendrels we find here.)
|
||||
*/
|
||||
void
|
||||
@ -234,10 +234,10 @@ add_vars_to_targetlist(PlannerInfo *root, List *vars,
|
||||
* means setting suitable where_needed values for them.
|
||||
*
|
||||
* Note that this only deals with lateral references in unflattened LATERAL
|
||||
* subqueries. When we flatten a LATERAL subquery, its lateral references
|
||||
* subqueries. When we flatten a LATERAL subquery, its lateral references
|
||||
* become plain Vars in the parent query, but they may have to be wrapped in
|
||||
* PlaceHolderVars if they need to be forced NULL by outer joins that don't
|
||||
* also null the LATERAL subquery. That's all handled elsewhere.
|
||||
* also null the LATERAL subquery. That's all handled elsewhere.
|
||||
*
|
||||
* This has to run before deconstruct_jointree, since it might result in
|
||||
* creation of PlaceHolderInfos.
|
||||
@ -360,7 +360,7 @@ extract_lateral_references(PlannerInfo *root, RelOptInfo *brel, Index rtindex)
|
||||
/*
|
||||
* We mark the Vars as being "needed" at the LATERAL RTE. This is a bit
|
||||
* of a cheat: a more formal approach would be to mark each one as needed
|
||||
* at the join of the LATERAL RTE with its source RTE. But it will work,
|
||||
* at the join of the LATERAL RTE with its source RTE. But it will work,
|
||||
* and it's much less tedious than computing a separate where_needed for
|
||||
* each Var.
|
||||
*/
|
||||
@ -568,7 +568,7 @@ create_lateral_join_info(PlannerInfo *root)
|
||||
* add_lateral_info
|
||||
* Add a LateralJoinInfo to root->lateral_info_list, if needed
|
||||
*
|
||||
* We suppress redundant list entries. The passed Relids are copied if saved.
|
||||
* We suppress redundant list entries. The passed Relids are copied if saved.
|
||||
*/
|
||||
static void
|
||||
add_lateral_info(PlannerInfo *root, Relids lhs, Relids rhs)
|
||||
@ -615,7 +615,7 @@ add_lateral_info(PlannerInfo *root, Relids lhs, Relids rhs)
|
||||
* deconstruct_jointree
|
||||
* Recursively scan the query's join tree for WHERE and JOIN/ON qual
|
||||
* clauses, and add these to the appropriate restrictinfo and joininfo
|
||||
* lists belonging to base RelOptInfos. Also, add SpecialJoinInfo nodes
|
||||
* lists belonging to base RelOptInfos. Also, add SpecialJoinInfo nodes
|
||||
* to root->join_info_list for any outer joins appearing in the query tree.
|
||||
* Return a "joinlist" data structure showing the join order decisions
|
||||
* that need to be made by make_one_rel().
|
||||
@ -632,9 +632,9 @@ add_lateral_info(PlannerInfo *root, Relids lhs, Relids rhs)
|
||||
* be evaluated at the lowest level where all the variables it mentions are
|
||||
* available. However, we cannot push a qual down into the nullable side(s)
|
||||
* of an outer join since the qual might eliminate matching rows and cause a
|
||||
* NULL row to be incorrectly emitted by the join. Therefore, we artificially
|
||||
* NULL row to be incorrectly emitted by the join. Therefore, we artificially
|
||||
* OR the minimum-relids of such an outer join into the required_relids of
|
||||
* clauses appearing above it. This forces those clauses to be delayed until
|
||||
* clauses appearing above it. This forces those clauses to be delayed until
|
||||
* application of the outer join (or maybe even higher in the join tree).
|
||||
*/
|
||||
List *
|
||||
@ -755,7 +755,7 @@ deconstruct_recurse(PlannerInfo *root, Node *jtnode, bool below_outer_join,
|
||||
*inner_join_rels = *qualscope;
|
||||
|
||||
/*
|
||||
* Try to process any quals postponed by children. If they need
|
||||
* Try to process any quals postponed by children. If they need
|
||||
* further postponement, add them to my output postponed_qual_list.
|
||||
*/
|
||||
foreach(l, child_postponed_quals)
|
||||
@ -807,7 +807,7 @@ deconstruct_recurse(PlannerInfo *root, Node *jtnode, bool below_outer_join,
|
||||
* regard for whether this level is an outer join, which is correct.
|
||||
* Then we place our own join quals, which are restricted by lower
|
||||
* outer joins in any case, and are forced to this level if this is an
|
||||
* outer join and they mention the outer side. Finally, if this is an
|
||||
* outer join and they mention the outer side. Finally, if this is an
|
||||
* outer join, we create a join_info_list entry for the join. This
|
||||
* will prevent quals above us in the join tree that use those rels
|
||||
* from being pushed down below this level. (It's okay for upper
|
||||
@ -897,7 +897,7 @@ deconstruct_recurse(PlannerInfo *root, Node *jtnode, bool below_outer_join,
|
||||
nullable_rels);
|
||||
|
||||
/*
|
||||
* Try to process any quals postponed by children. If they need
|
||||
* Try to process any quals postponed by children. If they need
|
||||
* further postponement, add them to my output postponed_qual_list.
|
||||
* Quals that can be processed now must be included in my_quals, so
|
||||
* that they'll be handled properly in make_outerjoininfo.
|
||||
@ -1059,7 +1059,7 @@ make_outerjoininfo(PlannerInfo *root,
|
||||
* complain if any nullable rel is FOR [KEY] UPDATE/SHARE.
|
||||
*
|
||||
* You might be wondering why this test isn't made far upstream in the
|
||||
* parser. It's because the parser hasn't got enough info --- consider
|
||||
* parser. It's because the parser hasn't got enough info --- consider
|
||||
* FOR UPDATE applied to a view. Only after rewriting and flattening do
|
||||
* we know whether the view contains an outer join.
|
||||
*
|
||||
@ -1117,7 +1117,7 @@ make_outerjoininfo(PlannerInfo *root,
|
||||
min_lefthand = bms_intersect(clause_relids, left_rels);
|
||||
|
||||
/*
|
||||
* Similarly for required RHS. But here, we must also include any lower
|
||||
* Similarly for required RHS. But here, we must also include any lower
|
||||
* inner joins, to ensure we don't try to commute with any of them.
|
||||
*/
|
||||
min_righthand = bms_int_members(bms_union(clause_relids, inner_join_rels),
|
||||
@ -1169,7 +1169,7 @@ make_outerjoininfo(PlannerInfo *root,
|
||||
* Here, we have to consider that "our join condition" includes any
|
||||
* clauses that syntactically appeared above the lower OJ and below
|
||||
* ours; those are equivalent to degenerate clauses in our OJ and must
|
||||
* be treated as such. Such clauses obviously can't reference our
|
||||
* be treated as such. Such clauses obviously can't reference our
|
||||
* LHS, and they must be non-strict for the lower OJ's RHS (else
|
||||
* reduce_outer_joins would have reduced the lower OJ to a plain
|
||||
* join). Hence the other ways in which we handle clauses within our
|
||||
@ -1248,7 +1248,7 @@ make_outerjoininfo(PlannerInfo *root,
|
||||
* distribute_qual_to_rels
|
||||
* Add clause information to either the baserestrictinfo or joininfo list
|
||||
* (depending on whether the clause is a join) of each base relation
|
||||
* mentioned in the clause. A RestrictInfo node is created and added to
|
||||
* mentioned in the clause. A RestrictInfo node is created and added to
|
||||
* the appropriate list for each rel. Alternatively, if the clause uses a
|
||||
* mergejoinable operator and is not delayed by outer-join rules, enter
|
||||
* the left- and right-side expressions into the query's list of
|
||||
@ -1313,7 +1313,7 @@ distribute_qual_to_rels(PlannerInfo *root, Node *clause,
|
||||
* In ordinary SQL, a WHERE or JOIN/ON clause can't reference any rels
|
||||
* that aren't within its syntactic scope; however, if we pulled up a
|
||||
* LATERAL subquery then we might find such references in quals that have
|
||||
* been pulled up. We need to treat such quals as belonging to the join
|
||||
* been pulled up. We need to treat such quals as belonging to the join
|
||||
* level that includes every rel they reference. Although we could make
|
||||
* pull_up_subqueries() place such quals correctly to begin with, it's
|
||||
* easier to handle it here. When we find a clause that contains Vars
|
||||
@ -1357,10 +1357,10 @@ distribute_qual_to_rels(PlannerInfo *root, Node *clause,
|
||||
* gating Result plan node. We put such a clause into the regular
|
||||
* RestrictInfo lists for the moment, but eventually createplan.c will
|
||||
* pull it out and make a gating Result node immediately above whatever
|
||||
* plan node the pseudoconstant clause is assigned to. It's usually best
|
||||
* plan node the pseudoconstant clause is assigned to. It's usually best
|
||||
* to put a gating node as high in the plan tree as possible. If we are
|
||||
* not below an outer join, we can actually push the pseudoconstant qual
|
||||
* all the way to the top of the tree. If we are below an outer join, we
|
||||
* all the way to the top of the tree. If we are below an outer join, we
|
||||
* leave the qual at its original syntactic level (we could push it up to
|
||||
* just below the outer join, but that seems more complex than it's
|
||||
* worth).
|
||||
@ -1414,7 +1414,7 @@ distribute_qual_to_rels(PlannerInfo *root, Node *clause,
|
||||
* Note: it is not immediately obvious that a simple boolean is enough
|
||||
* for this: if for some reason we were to attach a degenerate qual to
|
||||
* its original join level, it would need to be treated as an outer join
|
||||
* qual there. However, this cannot happen, because all the rels the
|
||||
* qual there. However, this cannot happen, because all the rels the
|
||||
* clause mentions must be in the outer join's min_righthand, therefore
|
||||
* the join it needs must be formed before the outer join; and we always
|
||||
* attach quals to the lowest level where they can be evaluated. But
|
||||
@ -1448,7 +1448,7 @@ distribute_qual_to_rels(PlannerInfo *root, Node *clause,
|
||||
* We can't use such a clause to deduce equivalence (the left and
|
||||
* right sides might be unequal above the join because one of them has
|
||||
* gone to NULL) ... but we might be able to use it for more limited
|
||||
* deductions, if it is mergejoinable. So consider adding it to the
|
||||
* deductions, if it is mergejoinable. So consider adding it to the
|
||||
* lists of set-aside outer-join clauses.
|
||||
*/
|
||||
is_pushed_down = false;
|
||||
@ -1478,7 +1478,7 @@ distribute_qual_to_rels(PlannerInfo *root, Node *clause,
|
||||
else
|
||||
{
|
||||
/*
|
||||
* Normal qual clause or degenerate outer-join clause. Either way, we
|
||||
* Normal qual clause or degenerate outer-join clause. Either way, we
|
||||
* can mark it as pushed-down.
|
||||
*/
|
||||
is_pushed_down = true;
|
||||
@ -1598,7 +1598,7 @@ distribute_qual_to_rels(PlannerInfo *root, Node *clause,
|
||||
*
|
||||
* In all cases, it's important to initialize the left_ec and right_ec
|
||||
* fields of a mergejoinable clause, so that all possibly mergejoinable
|
||||
* expressions have representations in EquivalenceClasses. If
|
||||
* expressions have representations in EquivalenceClasses. If
|
||||
* process_equivalence is successful, it will take care of that;
|
||||
* otherwise, we have to call initialize_mergeclause_eclasses to do it.
|
||||
*/
|
||||
@ -1674,7 +1674,7 @@ distribute_qual_to_rels(PlannerInfo *root, Node *clause,
|
||||
* For an is_pushed_down qual, we can evaluate the qual as soon as (1) we have
|
||||
* all the rels it mentions, and (2) we are at or above any outer joins that
|
||||
* can null any of these rels and are below the syntactic location of the
|
||||
* given qual. We must enforce (2) because pushing down such a clause below
|
||||
* given qual. We must enforce (2) because pushing down such a clause below
|
||||
* the OJ might cause the OJ to emit null-extended rows that should not have
|
||||
* been formed, or that should have been rejected by the clause. (This is
|
||||
* only an issue for non-strict quals, since if we can prove a qual mentioning
|
||||
@ -1700,7 +1700,7 @@ distribute_qual_to_rels(PlannerInfo *root, Node *clause,
|
||||
* required relids overlap the LHS too) causes that OJ's delay_upper_joins
|
||||
* flag to be set TRUE. This will prevent any higher-level OJs from
|
||||
* being interchanged with that OJ, which would result in not having any
|
||||
* correct place to evaluate the qual. (The case we care about here is a
|
||||
* correct place to evaluate the qual. (The case we care about here is a
|
||||
* sub-select WHERE clause within the RHS of some outer join. The WHERE
|
||||
* clause must effectively be treated as a degenerate clause of that outer
|
||||
* join's condition. Rather than trying to match such clauses with joins
|
||||
@ -1928,7 +1928,7 @@ distribute_restrictinfo_to_rels(PlannerInfo *root,
|
||||
* that provides all its variables.
|
||||
*
|
||||
* "nullable_relids" is the set of relids used in the expressions that are
|
||||
* potentially nullable below the expressions. (This has to be supplied by
|
||||
* potentially nullable below the expressions. (This has to be supplied by
|
||||
* caller because this function is used after deconstruct_jointree, so we
|
||||
* don't have knowledge of where the clause items came from.)
|
||||
*
|
||||
@ -2098,7 +2098,7 @@ check_mergejoinable(RestrictInfo *restrictinfo)
|
||||
* info fields in the restrictinfo.
|
||||
*
|
||||
* Currently, we support hashjoin for binary opclauses where
|
||||
* the operator is a hashjoinable operator. The arguments can be
|
||||
* the operator is a hashjoinable operator. The arguments can be
|
||||
* anything --- as long as there are no volatile functions in them.
|
||||
*/
|
||||
static void
|
||||
|
@ -10,9 +10,9 @@
|
||||
* ORDER BY col ASC/DESC
|
||||
* LIMIT 1)
|
||||
* Given a suitable index on tab.col, this can be much faster than the
|
||||
* generic scan-all-the-rows aggregation plan. We can handle multiple
|
||||
* generic scan-all-the-rows aggregation plan. We can handle multiple
|
||||
* MIN/MAX aggregates by generating multiple subqueries, and their
|
||||
* orderings can be different. However, if the query contains any
|
||||
* orderings can be different. However, if the query contains any
|
||||
* non-optimizable aggregates, there's no point since we'll have to
|
||||
* scan all the rows anyway.
|
||||
*
|
||||
@ -128,7 +128,7 @@ preprocess_minmax_aggregates(PlannerInfo *root, List *tlist)
|
||||
|
||||
/*
|
||||
* Scan the tlist and HAVING qual to find all the aggregates and verify
|
||||
* all are MIN/MAX aggregates. Stop as soon as we find one that isn't.
|
||||
* all are MIN/MAX aggregates. Stop as soon as we find one that isn't.
|
||||
*/
|
||||
aggs_list = NIL;
|
||||
if (find_minmax_aggs_walker((Node *) tlist, &aggs_list))
|
||||
@ -163,7 +163,7 @@ preprocess_minmax_aggregates(PlannerInfo *root, List *tlist)
|
||||
* We can use either an ordering that gives NULLS FIRST or one that
|
||||
* gives NULLS LAST; furthermore there's unlikely to be much
|
||||
* performance difference between them, so it doesn't seem worth
|
||||
* costing out both ways if we get a hit on the first one. NULLS
|
||||
* costing out both ways if we get a hit on the first one. NULLS
|
||||
* FIRST is more likely to be available if the operator is a
|
||||
* reverse-sort operator, so try that first if reverse.
|
||||
*/
|
||||
|
@ -36,7 +36,7 @@
|
||||
* which may involve joins but not any fancier features.
|
||||
*
|
||||
* Since query_planner does not handle the toplevel processing (grouping,
|
||||
* sorting, etc) it cannot select the best path by itself. It selects
|
||||
* sorting, etc) it cannot select the best path by itself. It selects
|
||||
* two paths: the cheapest path that produces all the required tuples,
|
||||
* independent of any ordering considerations, and the cheapest path that
|
||||
* produces the expected fraction of the required tuples in the required
|
||||
@ -100,7 +100,7 @@ query_planner(PlannerInfo *root, List *tlist,
|
||||
|
||||
/*
|
||||
* If the query has an empty join tree, then it's something easy like
|
||||
* "SELECT 2+2;" or "INSERT ... VALUES()". Fall through quickly.
|
||||
* "SELECT 2+2;" or "INSERT ... VALUES()". Fall through quickly.
|
||||
*/
|
||||
if (parse->jointree->fromlist == NIL)
|
||||
{
|
||||
@ -160,7 +160,7 @@ query_planner(PlannerInfo *root, List *tlist,
|
||||
/*
|
||||
* Examine the targetlist and join tree, adding entries to baserel
|
||||
* targetlists for all referenced Vars, and generating PlaceHolderInfo
|
||||
* entries for all referenced PlaceHolderVars. Restrict and join clauses
|
||||
* entries for all referenced PlaceHolderVars. Restrict and join clauses
|
||||
* are added to appropriate lists belonging to the mentioned relations. We
|
||||
* also build EquivalenceClasses for provably equivalent expressions. The
|
||||
* SpecialJoinInfo list is also built to hold information about join order
|
||||
@ -184,7 +184,7 @@ query_planner(PlannerInfo *root, List *tlist,
|
||||
|
||||
/*
|
||||
* If we formed any equivalence classes, generate additional restriction
|
||||
* clauses as appropriate. (Implied join clauses are formed on-the-fly
|
||||
* clauses as appropriate. (Implied join clauses are formed on-the-fly
|
||||
* later.)
|
||||
*/
|
||||
generate_base_implied_equalities(root);
|
||||
@ -199,14 +199,14 @@ query_planner(PlannerInfo *root, List *tlist,
|
||||
/*
|
||||
* Examine any "placeholder" expressions generated during subquery pullup.
|
||||
* Make sure that the Vars they need are marked as needed at the relevant
|
||||
* join level. This must be done before join removal because it might
|
||||
* join level. This must be done before join removal because it might
|
||||
* cause Vars or placeholders to be needed above a join when they weren't
|
||||
* so marked before.
|
||||
*/
|
||||
fix_placeholder_input_needed_levels(root);
|
||||
|
||||
/*
|
||||
* Remove any useless outer joins. Ideally this would be done during
|
||||
* Remove any useless outer joins. Ideally this would be done during
|
||||
* jointree preprocessing, but the necessary information isn't available
|
||||
* until we've built baserel data structures and classified qual clauses.
|
||||
*/
|
||||
@ -299,7 +299,7 @@ query_planner(PlannerInfo *root, List *tlist,
|
||||
/*
|
||||
* If both GROUP BY and ORDER BY are specified, we will need two
|
||||
* levels of sort --- and, therefore, certainly need to read all the
|
||||
* tuples --- unless ORDER BY is a subset of GROUP BY. Likewise if we
|
||||
* tuples --- unless ORDER BY is a subset of GROUP BY. Likewise if we
|
||||
* have both DISTINCT and GROUP BY, or if we have a window
|
||||
* specification not compatible with the GROUP BY.
|
||||
*/
|
||||
|
@ -191,7 +191,7 @@ standard_planner(Query *parse, int cursorOptions, ParamListInfo boundParams)
|
||||
|
||||
/*
|
||||
* We document cursor_tuple_fraction as simply being a fraction, which
|
||||
* means the edge cases 0 and 1 have to be treated specially here. We
|
||||
* means the edge cases 0 and 1 have to be treated specially here. We
|
||||
* convert 1 to 0 ("all the tuples") and 0 to a very small fraction.
|
||||
*/
|
||||
if (tuple_fraction >= 1.0)
|
||||
@ -384,7 +384,7 @@ subquery_planner(PlannerGlobal *glob, Query *parse,
|
||||
}
|
||||
|
||||
/*
|
||||
* Preprocess RowMark information. We need to do this after subquery
|
||||
* Preprocess RowMark information. We need to do this after subquery
|
||||
* pullup (so that all non-inherited RTEs are present) and before
|
||||
* inheritance expansion (so that the info is available for
|
||||
* expand_inherited_tables to examine and modify).
|
||||
@ -492,7 +492,7 @@ subquery_planner(PlannerGlobal *glob, Query *parse,
|
||||
* to execute that we're better off doing it only once per group, despite
|
||||
* the loss of selectivity. This is hard to estimate short of doing the
|
||||
* entire planning process twice, so we use a heuristic: clauses
|
||||
* containing subplans are left in HAVING. Otherwise, we move or copy the
|
||||
* containing subplans are left in HAVING. Otherwise, we move or copy the
|
||||
* HAVING clause into WHERE, in hopes of eliminating tuples before
|
||||
* aggregation instead of after.
|
||||
*
|
||||
@ -1044,7 +1044,7 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
|
||||
|
||||
/*
|
||||
* If there's a top-level ORDER BY, assume we have to fetch all the
|
||||
* tuples. This might be too simplistic given all the hackery below
|
||||
* tuples. This might be too simplistic given all the hackery below
|
||||
* to possibly avoid the sort; but the odds of accurate estimates here
|
||||
* are pretty low anyway.
|
||||
*/
|
||||
@ -1071,7 +1071,7 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
|
||||
|
||||
/*
|
||||
* We should not need to call preprocess_targetlist, since we must be
|
||||
* in a SELECT query node. Instead, use the targetlist returned by
|
||||
* in a SELECT query node. Instead, use the targetlist returned by
|
||||
* plan_set_operations (since this tells whether it returned any
|
||||
* resjunk columns!), and transfer any sort key information from the
|
||||
* original tlist.
|
||||
@ -1467,7 +1467,7 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
|
||||
* Furthermore, there cannot be any variables in either HAVING
|
||||
* or the targetlist, so we actually do not need the FROM
|
||||
* table at all! We can just throw away the plan-so-far and
|
||||
* generate a Result node. This is a sufficiently unusual
|
||||
* generate a Result node. This is a sufficiently unusual
|
||||
* corner case that it's not worth contorting the structure of
|
||||
* this routine to avoid having to generate the plan in the
|
||||
* first place.
|
||||
@ -1511,14 +1511,14 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
|
||||
|
||||
/*
|
||||
* The "base" targetlist for all steps of the windowing process is
|
||||
* a flat tlist of all Vars and Aggs needed in the result. (In
|
||||
* a flat tlist of all Vars and Aggs needed in the result. (In
|
||||
* some cases we wouldn't need to propagate all of these all the
|
||||
* way to the top, since they might only be needed as inputs to
|
||||
* WindowFuncs. It's probably not worth trying to optimize that
|
||||
* though.) We also add window partitioning and sorting
|
||||
* expressions to the base tlist, to ensure they're computed only
|
||||
* once at the bottom of the stack (that's critical for volatile
|
||||
* functions). As we climb up the stack, we'll add outputs for
|
||||
* functions). As we climb up the stack, we'll add outputs for
|
||||
* the WindowFuncs computed at each level.
|
||||
*/
|
||||
window_tlist = make_windowInputTargetList(root,
|
||||
@ -1527,7 +1527,7 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
|
||||
|
||||
/*
|
||||
* The copyObject steps here are needed to ensure that each plan
|
||||
* node has a separately modifiable tlist. (XXX wouldn't a
|
||||
* node has a separately modifiable tlist. (XXX wouldn't a
|
||||
* shallow list copy do for that?)
|
||||
*/
|
||||
result_plan->targetlist = (List *) copyObject(window_tlist);
|
||||
@ -1812,7 +1812,7 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
|
||||
*
|
||||
* Once grouping_planner() has applied a general tlist to the topmost
|
||||
* scan/join plan node, any tlist eval cost for added-on nodes should be
|
||||
* accounted for as we create those nodes. Presently, of the node types we
|
||||
* accounted for as we create those nodes. Presently, of the node types we
|
||||
* can add on later, only Agg, WindowAgg, and Group project new tlists (the
|
||||
* rest just copy their input tuples) --- so make_agg(), make_windowagg() and
|
||||
* make_group() are responsible for calling this function to account for their
|
||||
@ -1978,7 +1978,7 @@ preprocess_rowmarks(PlannerInfo *root)
|
||||
|
||||
/*
|
||||
* Currently, it is syntactically impossible to have FOR UPDATE et al
|
||||
* applied to an update/delete target rel. If that ever becomes
|
||||
* applied to an update/delete target rel. If that ever becomes
|
||||
* possible, we should drop the target from the PlanRowMark list.
|
||||
*/
|
||||
Assert(rc->rti != parse->resultRelation);
|
||||
@ -2062,7 +2062,7 @@ preprocess_rowmarks(PlannerInfo *root)
|
||||
* preprocess_limit - do pre-estimation for LIMIT and/or OFFSET clauses
|
||||
*
|
||||
* We try to estimate the values of the LIMIT/OFFSET clauses, and pass the
|
||||
* results back in *count_est and *offset_est. These variables are set to
|
||||
* results back in *count_est and *offset_est. These variables are set to
|
||||
* 0 if the corresponding clause is not present, and -1 if it's present
|
||||
* but we couldn't estimate the value for it. (The "0" convention is OK
|
||||
* for OFFSET but a little bit bogus for LIMIT: effectively we estimate
|
||||
@ -2071,7 +2071,7 @@ preprocess_rowmarks(PlannerInfo *root)
|
||||
* be passed to make_limit, which see if you change this code.
|
||||
*
|
||||
* The return value is the suitably adjusted tuple_fraction to use for
|
||||
* planning the query. This adjustment is not overridable, since it reflects
|
||||
* planning the query. This adjustment is not overridable, since it reflects
|
||||
* plan actions that grouping_planner() will certainly take, not assumptions
|
||||
* about context.
|
||||
*/
|
||||
@ -2195,7 +2195,7 @@ preprocess_limit(PlannerInfo *root, double tuple_fraction,
|
||||
else if (*offset_est != 0 && tuple_fraction > 0.0)
|
||||
{
|
||||
/*
|
||||
* We have an OFFSET but no LIMIT. This acts entirely differently
|
||||
* We have an OFFSET but no LIMIT. This acts entirely differently
|
||||
* from the LIMIT case: here, we need to increase rather than decrease
|
||||
* the caller's tuple_fraction, because the OFFSET acts to cause more
|
||||
* tuples to be fetched instead of fewer. This only matters if we got
|
||||
@ -2210,7 +2210,7 @@ preprocess_limit(PlannerInfo *root, double tuple_fraction,
|
||||
|
||||
/*
|
||||
* If we have absolute counts from both caller and OFFSET, add them
|
||||
* together; likewise if they are both fractional. If one is
|
||||
* together; likewise if they are both fractional. If one is
|
||||
* fractional and the other absolute, we want to take the larger, and
|
||||
* we heuristically assume that's the fractional one.
|
||||
*/
|
||||
@ -2251,7 +2251,7 @@ preprocess_limit(PlannerInfo *root, double tuple_fraction,
|
||||
*
|
||||
* If we have constant-zero OFFSET and constant-null LIMIT, we can skip adding
|
||||
* a Limit node. This is worth checking for because "OFFSET 0" is a common
|
||||
* locution for an optimization fence. (Because other places in the planner
|
||||
* locution for an optimization fence. (Because other places in the planner
|
||||
* merely check whether parse->limitOffset isn't NULL, it will still work as
|
||||
* an optimization fence --- we're just suppressing unnecessary run-time
|
||||
* overhead.)
|
||||
@ -2494,7 +2494,7 @@ choose_hashed_grouping(PlannerInfo *root,
|
||||
|
||||
/*
|
||||
* Executor doesn't support hashed aggregation with DISTINCT or ORDER BY
|
||||
* aggregates. (Doing so would imply storing *all* the input values in
|
||||
* aggregates. (Doing so would imply storing *all* the input values in
|
||||
* the hash table, and/or running many sorts in parallel, either of which
|
||||
* seems like a certain loser.)
|
||||
*/
|
||||
@ -2636,7 +2636,7 @@ choose_hashed_grouping(PlannerInfo *root,
|
||||
* pass in the costs as individual variables.)
|
||||
*
|
||||
* But note that making the two choices independently is a bit bogus in
|
||||
* itself. If the two could be combined into a single choice operation
|
||||
* itself. If the two could be combined into a single choice operation
|
||||
* it'd probably be better, but that seems far too unwieldy to be practical,
|
||||
* especially considering that the combination of GROUP BY and DISTINCT
|
||||
* isn't very common in real queries. By separating them, we are giving
|
||||
@ -2733,7 +2733,7 @@ choose_hashed_distinct(PlannerInfo *root,
|
||||
0.0, work_mem, limit_tuples);
|
||||
|
||||
/*
|
||||
* Now for the GROUP case. See comments in grouping_planner about the
|
||||
* Now for the GROUP case. See comments in grouping_planner about the
|
||||
* sorting choices here --- this code should match that code.
|
||||
*/
|
||||
sorted_p.startup_cost = sorted_startup_cost;
|
||||
@ -2927,7 +2927,7 @@ make_subplanTargetList(PlannerInfo *root,
|
||||
* add them to the result tlist if not already present. (A Var used
|
||||
* directly as a GROUP BY item will be present already.) Note this
|
||||
* includes Vars used in resjunk items, so we are covering the needs of
|
||||
* ORDER BY and window specifications. Vars used within Aggrefs will be
|
||||
* ORDER BY and window specifications. Vars used within Aggrefs will be
|
||||
* pulled out here, too.
|
||||
*/
|
||||
non_group_vars = pull_var_clause((Node *) non_group_cols,
|
||||
@ -2978,7 +2978,7 @@ get_grouping_column_index(Query *parse, TargetEntry *tle)
|
||||
* Locate grouping columns in the tlist chosen by create_plan.
|
||||
*
|
||||
* This is only needed if we don't use the sub_tlist chosen by
|
||||
* make_subplanTargetList. We have to forget the column indexes found
|
||||
* make_subplanTargetList. We have to forget the column indexes found
|
||||
* by that routine and re-locate the grouping exprs in the real sub_tlist.
|
||||
* We assume the grouping exprs are just Vars (see make_subplanTargetList).
|
||||
*/
|
||||
@ -3009,11 +3009,11 @@ locate_grouping_columns(PlannerInfo *root,
|
||||
|
||||
/*
|
||||
* The grouping column returned by create_plan might not have the same
|
||||
* typmod as the original Var. (This can happen in cases where a
|
||||
* typmod as the original Var. (This can happen in cases where a
|
||||
* set-returning function has been inlined, so that we now have more
|
||||
* knowledge about what it returns than we did when the original Var
|
||||
* was created.) So we can't use tlist_member() to search the tlist;
|
||||
* instead use tlist_member_match_var. For safety, still check that
|
||||
* instead use tlist_member_match_var. For safety, still check that
|
||||
* the vartype matches.
|
||||
*/
|
||||
if (!(groupexpr && IsA(groupexpr, Var)))
|
||||
@ -3139,7 +3139,7 @@ select_active_windows(PlannerInfo *root, WindowFuncLists *wflists)
|
||||
*
|
||||
* When grouping_planner inserts one or more WindowAgg nodes into the plan,
|
||||
* this function computes the initial target list to be computed by the node
|
||||
* just below the first WindowAgg. This list must contain all values needed
|
||||
* just below the first WindowAgg. This list must contain all values needed
|
||||
* to evaluate the window functions, compute the final target list, and
|
||||
* perform any required final sort step. If multiple WindowAggs are needed,
|
||||
* each intermediate one adds its window function results onto this tlist;
|
||||
@ -3147,7 +3147,7 @@ select_active_windows(PlannerInfo *root, WindowFuncLists *wflists)
|
||||
*
|
||||
* This function is much like make_subplanTargetList, though not quite enough
|
||||
* like it to share code. As in that function, we flatten most expressions
|
||||
* into their component variables. But we do not want to flatten window
|
||||
* into their component variables. But we do not want to flatten window
|
||||
* PARTITION BY/ORDER BY clauses, since that might result in multiple
|
||||
* evaluations of them, which would be bad (possibly even resulting in
|
||||
* inconsistent answers, if they contain volatile functions). Also, we must
|
||||
@ -3320,7 +3320,7 @@ make_pathkeys_for_window(PlannerInfo *root, WindowClause *wc,
|
||||
* This depends on the behavior of make_pathkeys_for_window()!
|
||||
*
|
||||
* We are given the target WindowClause and an array of the input column
|
||||
* numbers associated with the resulting pathkeys. In the easy case, there
|
||||
* numbers associated with the resulting pathkeys. In the easy case, there
|
||||
* are the same number of pathkey columns as partitioning + ordering columns
|
||||
* and we just have to copy some data around. However, it's possible that
|
||||
* some of the original partitioning + ordering columns were eliminated as
|
||||
@ -3332,7 +3332,7 @@ make_pathkeys_for_window(PlannerInfo *root, WindowClause *wc,
|
||||
* determine which keys are significant.
|
||||
*
|
||||
* The method used here is a bit brute-force: add the sort columns to a list
|
||||
* one at a time and note when the resulting pathkey list gets longer. But
|
||||
* one at a time and note when the resulting pathkey list gets longer. But
|
||||
* it's a sufficiently uncommon case that a faster way doesn't seem worth
|
||||
* the amount of code refactoring that'd be needed.
|
||||
*----------
|
||||
|
@ -145,7 +145,7 @@ static bool extract_query_dependencies_walker(Node *node,
|
||||
/*
|
||||
* set_plan_references
|
||||
*
|
||||
* This is the final processing pass of the planner/optimizer. The plan
|
||||
* This is the final processing pass of the planner/optimizer. The plan
|
||||
* tree is complete; we just have to adjust some representational details
|
||||
* for the convenience of the executor:
|
||||
*
|
||||
@ -189,7 +189,7 @@ static bool extract_query_dependencies_walker(Node *node,
|
||||
* and root->glob->invalItems (for everything else).
|
||||
*
|
||||
* Notice that we modify Plan nodes in-place, but use expression_tree_mutator
|
||||
* to process targetlist and qual expressions. We can assume that the Plan
|
||||
* to process targetlist and qual expressions. We can assume that the Plan
|
||||
* nodes were just built by the planner and are not multiply referenced, but
|
||||
* it's not so safe to assume that for expression tree nodes.
|
||||
*/
|
||||
@ -262,7 +262,7 @@ add_rtes_to_flat_rtable(PlannerInfo *root, bool recursing)
|
||||
/*
|
||||
* If there are any dead subqueries, they are not referenced in the Plan
|
||||
* tree, so we must add RTEs contained in them to the flattened rtable
|
||||
* separately. (If we failed to do this, the executor would not perform
|
||||
* separately. (If we failed to do this, the executor would not perform
|
||||
* expected permission checks for tables mentioned in such subqueries.)
|
||||
*
|
||||
* Note: this pass over the rangetable can't be combined with the previous
|
||||
@ -292,7 +292,7 @@ add_rtes_to_flat_rtable(PlannerInfo *root, bool recursing)
|
||||
/*
|
||||
* The subquery might never have been planned at all, if it
|
||||
* was excluded on the basis of self-contradictory constraints
|
||||
* in our query level. In this case apply
|
||||
* in our query level. In this case apply
|
||||
* flatten_unplanned_rtes.
|
||||
*
|
||||
* If it was planned but the plan is dummy, we assume that it
|
||||
@ -594,7 +594,7 @@ set_plan_refs(PlannerInfo *root, Plan *plan, int rtoffset)
|
||||
/*
|
||||
* These plan types don't actually bother to evaluate their
|
||||
* targetlists, because they just return their unmodified input
|
||||
* tuples. Even though the targetlist won't be used by the
|
||||
* tuples. Even though the targetlist won't be used by the
|
||||
* executor, we fix it up for possible use by EXPLAIN (not to
|
||||
* mention ease of debugging --- wrong varnos are very confusing).
|
||||
*/
|
||||
@ -612,7 +612,7 @@ set_plan_refs(PlannerInfo *root, Plan *plan, int rtoffset)
|
||||
|
||||
/*
|
||||
* Like the plan types above, LockRows doesn't evaluate its
|
||||
* tlist or quals. But we have to fix up the RT indexes in
|
||||
* tlist or quals. But we have to fix up the RT indexes in
|
||||
* its rowmarks.
|
||||
*/
|
||||
set_dummy_tlist_references(plan, rtoffset);
|
||||
@ -730,7 +730,7 @@ set_plan_refs(PlannerInfo *root, Plan *plan, int rtoffset)
|
||||
* Set up the visible plan targetlist as being the same as
|
||||
* the first RETURNING list. This is for the use of
|
||||
* EXPLAIN; the executor won't pay any attention to the
|
||||
* targetlist. We postpone this step until here so that
|
||||
* targetlist. We postpone this step until here so that
|
||||
* we don't have to do set_returning_clause_references()
|
||||
* twice on identical targetlists.
|
||||
*/
|
||||
@ -956,7 +956,7 @@ set_subqueryscan_references(PlannerInfo *root,
|
||||
else
|
||||
{
|
||||
/*
|
||||
* Keep the SubqueryScan node. We have to do the processing that
|
||||
* Keep the SubqueryScan node. We have to do the processing that
|
||||
* set_plan_references would otherwise have done on it. Notice we do
|
||||
* not do set_upper_references() here, because a SubqueryScan will
|
||||
* always have been created with correct references to its subplan's
|
||||
@ -1428,7 +1428,7 @@ set_dummy_tlist_references(Plan *plan, int rtoffset)
|
||||
*
|
||||
* In most cases, subplan tlists will be "flat" tlists with only Vars,
|
||||
* so we try to optimize that case by extracting information about Vars
|
||||
* in advance. Matching a parent tlist to a child is still an O(N^2)
|
||||
* in advance. Matching a parent tlist to a child is still an O(N^2)
|
||||
* operation, but at least with a much smaller constant factor than plain
|
||||
* tlist_member() searches.
|
||||
*
|
||||
@ -1873,7 +1873,7 @@ fix_upper_expr_mutator(Node *node, fix_upper_expr_context *context)
|
||||
* adjust any Vars that refer to other tables to reference junk tlist
|
||||
* entries in the top subplan's targetlist. Vars referencing the result
|
||||
* table should be left alone, however (the executor will evaluate them
|
||||
* using the actual heap tuple, after firing triggers if any). In the
|
||||
* using the actual heap tuple, after firing triggers if any). In the
|
||||
* adjusted RETURNING list, result-table Vars will have their original
|
||||
* varno (plus rtoffset), but Vars for other rels will have varno OUTER_VAR.
|
||||
*
|
||||
|
@ -434,7 +434,7 @@ make_subplan(PlannerInfo *root, Query *orig_subquery, SubLinkType subLinkType,
|
||||
Node *result;
|
||||
|
||||
/*
|
||||
* Copy the source Query node. This is a quick and dirty kluge to resolve
|
||||
* Copy the source Query node. This is a quick and dirty kluge to resolve
|
||||
* the fact that the parser can generate trees with multiple links to the
|
||||
* same sub-Query node, but the planner wants to scribble on the Query.
|
||||
* Try to clean this up when we do querytree redesign...
|
||||
@ -459,7 +459,7 @@ make_subplan(PlannerInfo *root, Query *orig_subquery, SubLinkType subLinkType,
|
||||
* path/costsize.c.
|
||||
*
|
||||
* XXX If an ANY subplan is uncorrelated, build_subplan may decide to hash
|
||||
* its output. In that case it would've been better to specify full
|
||||
* its output. In that case it would've been better to specify full
|
||||
* retrieval. At present, however, we can only check hashability after
|
||||
* we've made the subplan :-(. (Determining whether it'll fit in work_mem
|
||||
* is the really hard part.) Therefore, we don't want to be too
|
||||
@ -496,7 +496,7 @@ make_subplan(PlannerInfo *root, Query *orig_subquery, SubLinkType subLinkType,
|
||||
/*
|
||||
* If it's a correlated EXISTS with an unimportant targetlist, we might be
|
||||
* able to transform it to the equivalent of an IN and then implement it
|
||||
* by hashing. We don't have enough information yet to tell which way is
|
||||
* by hashing. We don't have enough information yet to tell which way is
|
||||
* likely to be better (it depends on the expected number of executions of
|
||||
* the EXISTS qual, and we are much too early in planning the outer query
|
||||
* to be able to guess that). So we generate both plans, if possible, and
|
||||
@ -724,7 +724,7 @@ build_subplan(PlannerInfo *root, Plan *plan, PlannerInfo *subroot,
|
||||
* Otherwise, we have the option to tack a Material node onto the top
|
||||
* of the subplan, to reduce the cost of reading it repeatedly. This
|
||||
* is pointless for a direct-correlated subplan, since we'd have to
|
||||
* recompute its results each time anyway. For uncorrelated/undirect
|
||||
* recompute its results each time anyway. For uncorrelated/undirect
|
||||
* correlated subplans, we add Material unless the subplan's top plan
|
||||
* node would materialize its output anyway. Also, if enable_material
|
||||
* is false, then the user does not want us to materialize anything
|
||||
@ -750,10 +750,10 @@ build_subplan(PlannerInfo *root, Plan *plan, PlannerInfo *subroot,
|
||||
|
||||
/*
|
||||
* A parameterless subplan (not initplan) should be prepared to handle
|
||||
* REWIND efficiently. If it has direct parameters then there's no point
|
||||
* REWIND efficiently. If it has direct parameters then there's no point
|
||||
* since it'll be reset on each scan anyway; and if it's an initplan then
|
||||
* there's no point since it won't get re-run without parameter changes
|
||||
* anyway. The input of a hashed subplan doesn't need REWIND either.
|
||||
* anyway. The input of a hashed subplan doesn't need REWIND either.
|
||||
*/
|
||||
if (splan->parParam == NIL && !isInitPlan && !splan->useHashTable)
|
||||
root->glob->rewindPlanIDs = bms_add_member(root->glob->rewindPlanIDs,
|
||||
@ -853,7 +853,7 @@ generate_subquery_vars(PlannerInfo *root, List *tlist, Index varno)
|
||||
/*
|
||||
* convert_testexpr: convert the testexpr given by the parser into
|
||||
* actually executable form. This entails replacing PARAM_SUBLINK Params
|
||||
* with Params or Vars representing the results of the sub-select. The
|
||||
* with Params or Vars representing the results of the sub-select. The
|
||||
* nodes to be substituted are passed in as the List result from
|
||||
* generate_subquery_params or generate_subquery_vars.
|
||||
*/
|
||||
@ -955,7 +955,7 @@ testexpr_is_hashable(Node *testexpr)
|
||||
*
|
||||
* The combining operators must be hashable and strict. The need for
|
||||
* hashability is obvious, since we want to use hashing. Without
|
||||
* strictness, behavior in the presence of nulls is too unpredictable. We
|
||||
* strictness, behavior in the presence of nulls is too unpredictable. We
|
||||
* actually must assume even more than plain strictness: they can't yield
|
||||
* NULL for non-null inputs, either (see nodeSubplan.c). However, hash
|
||||
* indexes and hash joins assume that too.
|
||||
@ -1063,7 +1063,7 @@ SS_process_ctes(PlannerInfo *root)
|
||||
}
|
||||
|
||||
/*
|
||||
* Copy the source Query node. Probably not necessary, but let's keep
|
||||
* Copy the source Query node. Probably not necessary, but let's keep
|
||||
* this similar to make_subplan.
|
||||
*/
|
||||
subquery = (Query *) copyObject(cte->ctequery);
|
||||
@ -1089,7 +1089,7 @@ SS_process_ctes(PlannerInfo *root)
|
||||
elog(ERROR, "unexpected outer reference in CTE query");
|
||||
|
||||
/*
|
||||
* Make a SubPlan node for it. This is just enough unlike
|
||||
* Make a SubPlan node for it. This is just enough unlike
|
||||
* build_subplan that we can't share code.
|
||||
*
|
||||
* Note plan_id, plan_name, and cost fields are set further down.
|
||||
@ -1313,7 +1313,7 @@ convert_EXISTS_sublink_to_join(PlannerInfo *root, SubLink *sublink,
|
||||
|
||||
/*
|
||||
* See if the subquery can be simplified based on the knowledge that it's
|
||||
* being used in EXISTS(). If we aren't able to get rid of its
|
||||
* being used in EXISTS(). If we aren't able to get rid of its
|
||||
* targetlist, we have to fail, because the pullup operation leaves us
|
||||
* with noplace to evaluate the targetlist.
|
||||
*/
|
||||
@ -1362,9 +1362,9 @@ convert_EXISTS_sublink_to_join(PlannerInfo *root, SubLink *sublink,
|
||||
* what pull_up_subqueries has to go through.
|
||||
*
|
||||
* In fact, it's even easier than what convert_ANY_sublink_to_join has to
|
||||
* do. The machinations of simplify_EXISTS_query ensured that there is
|
||||
* do. The machinations of simplify_EXISTS_query ensured that there is
|
||||
* nothing interesting in the subquery except an rtable and jointree, and
|
||||
* even the jointree FromExpr no longer has quals. So we can just append
|
||||
* even the jointree FromExpr no longer has quals. So we can just append
|
||||
* the rtable to our own and use the FromExpr in our jointree. But first,
|
||||
* adjust all level-zero varnos in the subquery to account for the rtable
|
||||
* merger.
|
||||
@ -1495,7 +1495,7 @@ simplify_EXISTS_query(Query *query)
|
||||
*
|
||||
* On success, the modified subselect is returned, and we store a suitable
|
||||
* upper-level test expression at *testexpr, plus a list of the subselect's
|
||||
* output Params at *paramIds. (The test expression is already Param-ified
|
||||
* output Params at *paramIds. (The test expression is already Param-ified
|
||||
* and hence need not go through convert_testexpr, which is why we have to
|
||||
* deal with the Param IDs specially.)
|
||||
*
|
||||
@ -1658,7 +1658,7 @@ convert_EXISTS_to_ANY(PlannerInfo *root, Query *subselect,
|
||||
return NULL;
|
||||
|
||||
/*
|
||||
* Also reject sublinks in the stuff we intend to pull up. (It might be
|
||||
* Also reject sublinks in the stuff we intend to pull up. (It might be
|
||||
* possible to support this, but doesn't seem worth the complication.)
|
||||
*/
|
||||
if (contain_subplans((Node *) leftargs))
|
||||
@ -1860,7 +1860,7 @@ process_sublinks_mutator(Node *node, process_sublinks_context *context)
|
||||
* is needed for a bare List.)
|
||||
*
|
||||
* Anywhere within the top-level AND/OR clause structure, we can tell
|
||||
* make_subplan() that NULL and FALSE are interchangeable. So isTopQual
|
||||
* make_subplan() that NULL and FALSE are interchangeable. So isTopQual
|
||||
* propagates down in both cases. (Note that this is unlike the meaning
|
||||
* of "top level qual" used in most other places in Postgres.)
|
||||
*/
|
||||
@ -1966,7 +1966,7 @@ SS_finalize_plan(PlannerInfo *root, Plan *plan, bool attach_initplans)
|
||||
* Now determine the set of params that are validly referenceable in this
|
||||
* query level; to wit, those available from outer query levels plus the
|
||||
* output parameters of any local initPlans. (We do not include output
|
||||
* parameters of regular subplans. Those should only appear within the
|
||||
* parameters of regular subplans. Those should only appear within the
|
||||
* testexpr of SubPlan nodes, and are taken care of locally within
|
||||
* finalize_primnode. Likewise, special parameters that are generated by
|
||||
* nodes such as ModifyTable are handled within finalize_plan.)
|
||||
@ -2142,7 +2142,7 @@ finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params,
|
||||
/*
|
||||
* In a SubqueryScan, SS_finalize_plan has already been run on the
|
||||
* subplan by the inner invocation of subquery_planner, so there's
|
||||
* no need to do it again. Instead, just pull out the subplan's
|
||||
* no need to do it again. Instead, just pull out the subplan's
|
||||
* extParams list, which represents the params it needs from my
|
||||
* level and higher levels.
|
||||
*/
|
||||
@ -2476,7 +2476,7 @@ finalize_primnode(Node *node, finalize_primnode_context *context)
|
||||
|
||||
/*
|
||||
* Remove any param IDs of output parameters of the subplan that were
|
||||
* referenced in the testexpr. These are not interesting for
|
||||
* referenced in the testexpr. These are not interesting for
|
||||
* parameter change signaling since we always re-evaluate the subplan.
|
||||
* Note that this wouldn't work too well if there might be uses of the
|
||||
* same param IDs elsewhere in the plan, but that can't happen because
|
||||
|
@ -116,7 +116,7 @@ static Node *find_jointree_node_for_rel(Node *jtnode, int relid);
|
||||
*
|
||||
* A clause "foo op ANY (sub-SELECT)" can be processed by pulling the
|
||||
* sub-SELECT up to become a rangetable entry and treating the implied
|
||||
* comparisons as quals of a semijoin. However, this optimization *only*
|
||||
* comparisons as quals of a semijoin. However, this optimization *only*
|
||||
* works at the top level of WHERE or a JOIN/ON clause, because we cannot
|
||||
* distinguish whether the ANY ought to return FALSE or NULL in cases
|
||||
* involving NULL inputs. Also, in an outer join's ON clause we can only
|
||||
@ -133,7 +133,7 @@ static Node *find_jointree_node_for_rel(Node *jtnode, int relid);
|
||||
* transformations if any are found.
|
||||
*
|
||||
* This routine has to run before preprocess_expression(), so the quals
|
||||
* clauses are not yet reduced to implicit-AND format. That means we need
|
||||
* clauses are not yet reduced to implicit-AND format. That means we need
|
||||
* to recursively search through explicit AND clauses, which are
|
||||
* probably only binary ANDs. We stop as soon as we hit a non-AND item.
|
||||
*/
|
||||
@ -287,7 +287,7 @@ pull_up_sublinks_jointree_recurse(PlannerInfo *root, Node *jtnode,
|
||||
/*
|
||||
* Although we could include the pulled-up subqueries in the returned
|
||||
* relids, there's no need since upper quals couldn't refer to their
|
||||
* outputs anyway. But we *do* need to include the join's own rtindex
|
||||
* outputs anyway. But we *do* need to include the join's own rtindex
|
||||
* because we haven't yet collapsed join alias variables, so upper
|
||||
* levels would mistakenly think they couldn't use references to this
|
||||
* join.
|
||||
@ -612,7 +612,7 @@ pull_up_subqueries(PlannerInfo *root, Node *jtnode)
|
||||
*
|
||||
* If this jointree node is within either side of an outer join, then
|
||||
* lowest_outer_join references the lowest such JoinExpr node; otherwise
|
||||
* it is NULL. We use this to constrain the effects of LATERAL subqueries.
|
||||
* it is NULL. We use this to constrain the effects of LATERAL subqueries.
|
||||
*
|
||||
* If this jointree node is within the nullable side of an outer join, then
|
||||
* lowest_nulling_outer_join references the lowest such JoinExpr node;
|
||||
@ -762,7 +762,7 @@ pull_up_subqueries_recurse(PlannerInfo *root, Node *jtnode,
|
||||
* Attempt to pull up a single simple subquery.
|
||||
*
|
||||
* jtnode is a RangeTblRef that has been tentatively identified as a simple
|
||||
* subquery by pull_up_subqueries. We return the replacement jointree node,
|
||||
* subquery by pull_up_subqueries. We return the replacement jointree node,
|
||||
* or jtnode itself if we determine that the subquery can't be pulled up after
|
||||
* all.
|
||||
*
|
||||
@ -795,7 +795,7 @@ pull_up_simple_subquery(PlannerInfo *root, Node *jtnode, RangeTblEntry *rte,
|
||||
* Create a PlannerInfo data structure for this subquery.
|
||||
*
|
||||
* NOTE: the next few steps should match the first processing in
|
||||
* subquery_planner(). Can we refactor to avoid code duplication, or
|
||||
* subquery_planner(). Can we refactor to avoid code duplication, or
|
||||
* would that just make things uglier?
|
||||
*/
|
||||
subroot = makeNode(PlannerInfo);
|
||||
@ -845,7 +845,7 @@ pull_up_simple_subquery(PlannerInfo *root, Node *jtnode, RangeTblEntry *rte,
|
||||
|
||||
/*
|
||||
* Now we must recheck whether the subquery is still simple enough to pull
|
||||
* up. If not, abandon processing it.
|
||||
* up. If not, abandon processing it.
|
||||
*
|
||||
* We don't really need to recheck all the conditions involved, but it's
|
||||
* easier just to keep this "if" looking the same as the one in
|
||||
@ -862,7 +862,7 @@ pull_up_simple_subquery(PlannerInfo *root, Node *jtnode, RangeTblEntry *rte,
|
||||
* Give up, return unmodified RangeTblRef.
|
||||
*
|
||||
* Note: The work we just did will be redone when the subquery gets
|
||||
* planned on its own. Perhaps we could avoid that by storing the
|
||||
* planned on its own. Perhaps we could avoid that by storing the
|
||||
* modified subquery back into the rangetable, but I'm not gonna risk
|
||||
* it now.
|
||||
*/
|
||||
@ -903,7 +903,7 @@ pull_up_simple_subquery(PlannerInfo *root, Node *jtnode, RangeTblEntry *rte,
|
||||
* non-nullable items and lateral references may have to be turned into
|
||||
* PlaceHolderVars. If we are dealing with an appendrel member then
|
||||
* anything that's not a simple Var has to be turned into a
|
||||
* PlaceHolderVar. Set up required context data for pullup_replace_vars.
|
||||
* PlaceHolderVar. Set up required context data for pullup_replace_vars.
|
||||
*/
|
||||
rvcontext.root = root;
|
||||
rvcontext.targetlist = subquery->targetList;
|
||||
@ -928,7 +928,7 @@ pull_up_simple_subquery(PlannerInfo *root, Node *jtnode, RangeTblEntry *rte,
|
||||
* replace any of the jointree structure. (This'd be a lot cleaner if we
|
||||
* could use query_tree_mutator.) We have to use PHVs in the targetList,
|
||||
* returningList, and havingQual, since those are certainly above any
|
||||
* outer join. replace_vars_in_jointree tracks its location in the
|
||||
* outer join. replace_vars_in_jointree tracks its location in the
|
||||
* jointree and uses PHVs or not appropriately.
|
||||
*/
|
||||
parse->targetList = (List *)
|
||||
@ -1087,7 +1087,7 @@ pull_up_simple_subquery(PlannerInfo *root, Node *jtnode, RangeTblEntry *rte,
|
||||
* Pull up a single simple UNION ALL subquery.
|
||||
*
|
||||
* jtnode is a RangeTblRef that has been identified as a simple UNION ALL
|
||||
* subquery by pull_up_subqueries. We pull up the leaf subqueries and
|
||||
* subquery by pull_up_subqueries. We pull up the leaf subqueries and
|
||||
* build an "append relation" for the union set. The result value is just
|
||||
* jtnode, since we don't actually need to change the query jointree.
|
||||
*/
|
||||
@ -1101,7 +1101,7 @@ pull_up_simple_union_all(PlannerInfo *root, Node *jtnode, RangeTblEntry *rte)
|
||||
|
||||
/*
|
||||
* Make a modifiable copy of the subquery's rtable, so we can adjust
|
||||
* upper-level Vars in it. There are no such Vars in the setOperations
|
||||
* upper-level Vars in it. There are no such Vars in the setOperations
|
||||
* tree proper, so fixing the rtable should be sufficient.
|
||||
*/
|
||||
rtable = copyObject(subquery->rtable);
|
||||
@ -1373,7 +1373,7 @@ is_simple_subquery(Query *subquery, RangeTblEntry *rte,
|
||||
|
||||
/*
|
||||
* Don't pull up a subquery that has any set-returning functions in its
|
||||
* targetlist. Otherwise we might well wind up inserting set-returning
|
||||
* targetlist. Otherwise we might well wind up inserting set-returning
|
||||
* functions into places where they mustn't go, such as quals of higher
|
||||
* queries.
|
||||
*/
|
||||
@ -1382,7 +1382,7 @@ is_simple_subquery(Query *subquery, RangeTblEntry *rte,
|
||||
|
||||
/*
|
||||
* Don't pull up a subquery that has any volatile functions in its
|
||||
* targetlist. Otherwise we might introduce multiple evaluations of these
|
||||
* targetlist. Otherwise we might introduce multiple evaluations of these
|
||||
* functions, if they get copied to multiple places in the upper query,
|
||||
* leading to surprising results. (Note: the PlaceHolderVar mechanism
|
||||
* doesn't quite guarantee single evaluation; else we could pull up anyway
|
||||
@ -1612,7 +1612,7 @@ replace_vars_in_jointree(Node *jtnode,
|
||||
/*
|
||||
* If the RangeTblRef refers to a LATERAL subquery (that isn't the
|
||||
* same subquery we're pulling up), it might contain references to the
|
||||
* target subquery, which we must replace. We drive this from the
|
||||
* target subquery, which we must replace. We drive this from the
|
||||
* jointree scan, rather than a scan of the rtable, for a couple of
|
||||
* reasons: we can avoid processing no-longer-referenced RTEs, and we
|
||||
* can use the appropriate setting of need_phvs depending on whether
|
||||
@ -1773,7 +1773,7 @@ pullup_replace_vars_callback(Var *var,
|
||||
/*
|
||||
* Insert PlaceHolderVar if needed. Notice that we are wrapping one
|
||||
* PlaceHolderVar around the whole RowExpr, rather than putting one
|
||||
* around each element of the row. This is because we need the
|
||||
* around each element of the row. This is because we need the
|
||||
* expression to yield NULL, not ROW(NULL,NULL,...) when it is forced
|
||||
* to null by an outer join.
|
||||
*/
|
||||
@ -1875,7 +1875,7 @@ pullup_replace_vars_callback(Var *var,
|
||||
|
||||
/*
|
||||
* Cache it if possible (ie, if the attno is in range, which it
|
||||
* probably always should be). We can cache the value even if we
|
||||
* probably always should be). We can cache the value even if we
|
||||
* decided we didn't need a PHV, since this result will be
|
||||
* suitable for any request that has need_phvs.
|
||||
*/
|
||||
@ -1918,7 +1918,7 @@ pullup_replace_vars_subquery(Query *query,
|
||||
*
|
||||
* If a query's setOperations tree consists entirely of simple UNION ALL
|
||||
* operations, flatten it into an append relation, which we can process more
|
||||
* intelligently than the general setops case. Otherwise, do nothing.
|
||||
* intelligently than the general setops case. Otherwise, do nothing.
|
||||
*
|
||||
* In most cases, this can succeed only for a top-level query, because for a
|
||||
* subquery in FROM, the parent query's invocation of pull_up_subqueries would
|
||||
@ -2030,7 +2030,7 @@ flatten_simple_union_all(PlannerInfo *root)
|
||||
* SELECT ... FROM a LEFT JOIN b ON (a.x = b.y) WHERE b.y IS NULL;
|
||||
* If the join clause is strict for b.y, then only null-extended rows could
|
||||
* pass the upper WHERE, and we can conclude that what the query is really
|
||||
* specifying is an anti-semijoin. We change the join type from JOIN_LEFT
|
||||
* specifying is an anti-semijoin. We change the join type from JOIN_LEFT
|
||||
* to JOIN_ANTI. The IS NULL clause then becomes redundant, and must be
|
||||
* removed to prevent bogus selectivity calculations, but we leave it to
|
||||
* distribute_qual_to_rels to get rid of such clauses.
|
||||
@ -2270,7 +2270,7 @@ reduce_outer_joins_pass2(Node *jtnode,
|
||||
/*
|
||||
* See if we can reduce JOIN_LEFT to JOIN_ANTI. This is the case if
|
||||
* the join's own quals are strict for any var that was forced null by
|
||||
* higher qual levels. NOTE: there are other ways that we could
|
||||
* higher qual levels. NOTE: there are other ways that we could
|
||||
* detect an anti-join, in particular if we were to check whether Vars
|
||||
* coming from the RHS must be non-null because of table constraints.
|
||||
* That seems complicated and expensive though (in particular, one
|
||||
@ -2428,7 +2428,7 @@ reduce_outer_joins_pass2(Node *jtnode,
|
||||
* pulled-up relid, and change them to reference the replacement relid(s).
|
||||
*
|
||||
* NOTE: although this has the form of a walker, we cheat and modify the
|
||||
* nodes in-place. This should be OK since the tree was copied by
|
||||
* nodes in-place. This should be OK since the tree was copied by
|
||||
* pullup_replace_vars earlier. Avoid scribbling on the original values of
|
||||
* the bitmapsets, though, because expression_tree_mutator doesn't copy those.
|
||||
*/
|
||||
|
@ -54,12 +54,12 @@ static Expr *process_duplicate_ors(List *orlist);
|
||||
* Although this can be invoked on its own, it's mainly intended as a helper
|
||||
* for eval_const_expressions(), and that context drives several design
|
||||
* decisions. In particular, if the input is already AND/OR flat, we must
|
||||
* preserve that property. We also don't bother to recurse in situations
|
||||
* preserve that property. We also don't bother to recurse in situations
|
||||
* where we can assume that lower-level executions of eval_const_expressions
|
||||
* would already have simplified sub-clauses of the input.
|
||||
*
|
||||
* The difference between this and a simple make_notclause() is that this
|
||||
* tries to get rid of the NOT node by logical simplification. It's clearly
|
||||
* tries to get rid of the NOT node by logical simplification. It's clearly
|
||||
* always a win if the NOT node can be eliminated altogether. However, our
|
||||
* use of DeMorgan's laws could result in having more NOT nodes rather than
|
||||
* fewer. We do that unconditionally anyway, because in WHERE clauses it's
|
||||
@ -152,7 +152,7 @@ negate_clause(Node *node)
|
||||
* those properties. For example, if no direct child of
|
||||
* the given AND clause is an AND or a NOT-above-OR, then
|
||||
* the recursive calls of negate_clause() can't return any
|
||||
* OR clauses. So we needn't call pull_ors() before
|
||||
* OR clauses. So we needn't call pull_ors() before
|
||||
* building a new OR clause. Similarly for the OR case.
|
||||
*--------------------
|
||||
*/
|
||||
@ -293,7 +293,7 @@ canonicalize_qual(Expr *qual)
|
||||
/*
|
||||
* Pull up redundant subclauses in OR-of-AND trees. We do this only
|
||||
* within the top-level AND/OR structure; there's no point in looking
|
||||
* deeper. Also remove any NULL constants in the top-level structure.
|
||||
* deeper. Also remove any NULL constants in the top-level structure.
|
||||
*/
|
||||
newqual = find_duplicate_ors(qual);
|
||||
|
||||
@ -374,7 +374,7 @@ pull_ors(List *orlist)
|
||||
*
|
||||
* This may seem like a fairly useless activity, but it turns out to be
|
||||
* applicable to many machine-generated queries, and there are also queries
|
||||
* in some of the TPC benchmarks that need it. This was in fact almost the
|
||||
* in some of the TPC benchmarks that need it. This was in fact almost the
|
||||
* sole useful side-effect of the old prepqual code that tried to force
|
||||
* the query into canonical AND-of-ORs form: the canonical equivalent of
|
||||
* ((A AND B) OR (A AND C))
|
||||
@ -400,7 +400,7 @@ pull_ors(List *orlist)
|
||||
* results, so it's valid to treat NULL::boolean the same as FALSE and then
|
||||
* simplify AND/OR accordingly.
|
||||
*
|
||||
* Returns the modified qualification. AND/OR flatness is preserved.
|
||||
* Returns the modified qualification. AND/OR flatness is preserved.
|
||||
*/
|
||||
static Expr *
|
||||
find_duplicate_ors(Expr *qual)
|
||||
|
@ -4,7 +4,7 @@
|
||||
* Routines to preprocess the parse tree target list
|
||||
*
|
||||
* For INSERT and UPDATE queries, the targetlist must contain an entry for
|
||||
* each attribute of the target relation in the correct order. For all query
|
||||
* each attribute of the target relation in the correct order. For all query
|
||||
* types, we may need to add junk tlist entries for Vars used in the RETURNING
|
||||
* list and row ID information needed for SELECT FOR UPDATE locking and/or
|
||||
* EvalPlanQual checking.
|
||||
@ -79,7 +79,7 @@ preprocess_targetlist(PlannerInfo *root, List *tlist)
|
||||
/*
|
||||
* Add necessary junk columns for rowmarked rels. These values are needed
|
||||
* for locking of rels selected FOR UPDATE/SHARE, and to do EvalPlanQual
|
||||
* rechecking. See comments for PlanRowMark in plannodes.h.
|
||||
* rechecking. See comments for PlanRowMark in plannodes.h.
|
||||
*/
|
||||
foreach(lc, root->rowMarks)
|
||||
{
|
||||
@ -144,7 +144,7 @@ preprocess_targetlist(PlannerInfo *root, List *tlist)
|
||||
/*
|
||||
* If the query has a RETURNING list, add resjunk entries for any Vars
|
||||
* used in RETURNING that belong to other relations. We need to do this
|
||||
* to make these Vars available for the RETURNING calculation. Vars that
|
||||
* to make these Vars available for the RETURNING calculation. Vars that
|
||||
* belong to the result rel don't need to be added, because they will be
|
||||
* made to refer to the actual heap tuple.
|
||||
*/
|
||||
@ -252,9 +252,9 @@ expand_targetlist(List *tlist, int command_type,
|
||||
* When generating a NULL constant for a dropped column, we label
|
||||
* it INT4 (any other guaranteed-to-exist datatype would do as
|
||||
* well). We can't label it with the dropped column's datatype
|
||||
* since that might not exist anymore. It does not really matter
|
||||
* since that might not exist anymore. It does not really matter
|
||||
* what we claim the type is, since NULL is NULL --- its
|
||||
* representation is datatype-independent. This could perhaps
|
||||
* representation is datatype-independent. This could perhaps
|
||||
* confuse code comparing the finished plan to the target
|
||||
* relation, however.
|
||||
*/
|
||||
@ -336,7 +336,7 @@ expand_targetlist(List *tlist, int command_type,
|
||||
/*
|
||||
* The remaining tlist entries should be resjunk; append them all to the
|
||||
* end of the new tlist, making sure they have resnos higher than the last
|
||||
* real attribute. (Note: although the rewriter already did such
|
||||
* real attribute. (Note: although the rewriter already did such
|
||||
* renumbering, we have to do it again here in case we are doing an UPDATE
|
||||
* in a table with dropped columns, or an inheritance child table with
|
||||
* extra columns.)
|
||||
|
@ -6,14 +6,14 @@
*
* There are two code paths in the planner for set-operation queries.
* If a subquery consists entirely of simple UNION ALL operations, it
* is converted into an "append relation". Otherwise, it is handled
* is converted into an "append relation". Otherwise, it is handled
* by the general code in this module (plan_set_operations and its
* subroutines). There is some support code here for the append-relation
* case, but most of the heavy lifting for that is done elsewhere,
* notably in prepjointree.c and allpaths.c.
*
* There is also some code here to support planning of queries that use
* inheritance (SELECT FROM foo*). Inheritance trees are converted into
* inheritance (SELECT FROM foo*). Inheritance trees are converted into
* append relations, and thenceforth share code with the UNION ALL case.
*
*
@ -576,7 +576,7 @@ generate_nonunion_plan(SetOperationStmt *op, PlannerInfo *root,
*
* The tlist for an Append plan isn't important as far as the Append is
* concerned, but we must make it look real anyway for the benefit of the
* next plan level up. In fact, it has to be real enough that the flag
* next plan level up. In fact, it has to be real enough that the flag
* column is shown as a variable not a constant, else setrefs.c will get
* confused.
*/
@ -969,7 +969,7 @@ generate_setop_tlist(List *colTypes, List *colCollations,
* Ensure the tlist entry's exposed collation matches the set-op. This
* is necessary because plan_set_operations() reports the result
* ordering as a list of SortGroupClauses, which don't carry collation
* themselves but just refer to tlist entries. If we don't show the
* themselves but just refer to tlist entries. If we don't show the
* right collation then planner.c might do the wrong thing in
* higher-level queries.
*
@ -1183,7 +1183,7 @@ generate_setop_grouplist(SetOperationStmt *op, List *targetlist)
/*
* expand_inherited_tables
* Expand each rangetable entry that represents an inheritance set
* into an "append relation". At the conclusion of this process,
* into an "append relation". At the conclusion of this process,
* the "inh" flag is set in all and only those RTEs that are append
* relation parents.
*/
@ -1215,7 +1215,7 @@ expand_inherited_tables(PlannerInfo *root)
* Check whether a rangetable entry represents an inheritance set.
* If so, add entries for all the child tables to the query's
* rangetable, and build AppendRelInfo nodes for all the child tables
* and add them to root->append_rel_list. If not, clear the entry's
* and add them to root->append_rel_list. If not, clear the entry's
* "inh" flag to prevent later code from looking for AppendRelInfos.
*
* Note that the original RTE is considered to represent the whole
@ -1526,7 +1526,7 @@ make_inh_translation_list(Relation oldrelation, Relation newrelation,
* parent rel's attribute numbering to the child's.
*
* The only surprise here is that we don't translate a parent whole-row
* reference into a child whole-row reference. That would mean requiring
* reference into a child whole-row reference. That would mean requiring
* permissions on all child columns, which is overly strict, since the
* query is really only going to reference the inherited columns. Instead
* we set the per-column bits for all inherited columns.
@ -1855,7 +1855,7 @@ adjust_relid_set(Relids relids, Index oldrelid, Index newrelid)
*
* The expressions have already been fixed, but we have to make sure that
* the target resnos match the child table (they may not, in the case of
* a column that was added after-the-fact by ALTER TABLE). In some cases
* a column that was added after-the-fact by ALTER TABLE). In some cases
* this can force us to re-order the tlist to preserve resno ordering.
* (We do all this work in special cases so that preptlist.c is fast for
* the typical case.)
@ -526,7 +526,7 @@ count_agg_clauses_walker(Node *node, count_agg_clauses_context *context)

/*
* If the transition type is pass-by-value then it doesn't add
* anything to the required size of the hashtable. If it is
* anything to the required size of the hashtable. If it is
* pass-by-reference then we have to add the estimated size of the
* value itself, plus palloc overhead.
*/
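
To make the sizing rule above concrete, here is a small standalone C sketch (hypothetical names and a made-up overhead constant, not the PostgreSQL implementation): a pass-by-value transition value adds nothing, while a pass-by-reference one adds its estimated width plus per-allocation overhead.

#include <stdbool.h>
#include <stdio.h>

/* Stand-in for per-allocation bookkeeping overhead (the real palloc
 * overhead is implementation-specific). */
#define ALLOC_CHUNK_OVERHEAD 16

/* Bytes one aggregate transition value adds to a hash table entry:
 * nothing if the type is passed by value, otherwise the estimated
 * datum width plus allocation overhead. */
static size_t
transition_space(bool transtype_by_val, size_t est_width)
{
    if (transtype_by_val)
        return 0;
    return est_width + ALLOC_CHUNK_OVERHEAD;
}

int
main(void)
{
    printf("by-value 8-byte state:      %zu bytes\n", transition_space(true, 8));
    printf("by-reference 16-byte state: %zu bytes\n", transition_space(false, 16));
    return 0;
}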
@ -818,7 +818,7 @@ contain_subplans_walker(Node *node, void *context)
* Recursively search for mutable functions within a clause.
*
* Returns true if any mutable function (or operator implemented by a
* mutable function) is found. This test is needed so that we don't
* mutable function) is found. This test is needed so that we don't
* mistakenly think that something like "WHERE random() < 0.5" can be treated
* as a constant qualification.
*
@ -945,7 +945,7 @@ contain_mutable_functions_walker(Node *node, void *context)
* invalid conversions of volatile expressions into indexscan quals.
*
* We will recursively look into Query nodes (i.e., SubLink sub-selects)
* but not into SubPlans. This is a bit odd, but intentional. If we are
* but not into SubPlans. This is a bit odd, but intentional. If we are
* looking at a SubLink, we are probably deciding whether a query tree
* transformation is safe, and a contained sub-select should affect that;
* for example, duplicating a sub-select containing a volatile function
@ -1076,7 +1076,7 @@ contain_volatile_functions_walker(Node *node, void *context)
* The idea here is that the caller has verified that the expression contains
* one or more Var or Param nodes (as appropriate for the caller's need), and
* now wishes to prove that the expression result will be NULL if any of these
* inputs is NULL. If we return false, then the proof succeeded.
* inputs is NULL. If we return false, then the proof succeeded.
*/
bool
contain_nonstrict_functions(Node *clause)
@ -1195,7 +1195,7 @@ contain_nonstrict_functions_walker(Node *node, void *context)
* Recursively search for leaky functions within a clause.
*
* Returns true if any function call with side-effect may be present in the
* clause. Qualifiers from outside the a security_barrier view should not
* clause. Qualifiers from outside the a security_barrier view should not
* be pushed down into the view, lest the contents of tuples intended to be
* filtered out be revealed via side effects.
*/
@ -1334,7 +1334,7 @@ contain_leaky_functions_walker(Node *node, void *context)
*
* Returns the set of all Relids that are referenced in the clause in such
* a way that the clause cannot possibly return TRUE if any of these Relids
* is an all-NULL row. (It is OK to err on the side of conservatism; hence
* is an all-NULL row. (It is OK to err on the side of conservatism; hence
* the analysis here is simplistic.)
*
* The semantics here are subtly different from contain_nonstrict_functions:
@ -1440,7 +1440,7 @@ find_nonnullable_rels_walker(Node *node, bool top_level)
* could be FALSE (hence not NULL). However, if *all* the
* arms produce NULL then the result is NULL, so we can take
* the intersection of the sets of nonnullable rels, just as
* for OR. Fall through to share code.
* for OR. Fall through to share code.
*/
/* FALL THRU */
case OR_EXPR:
@ -1648,7 +1648,7 @@ find_nonnullable_vars_walker(Node *node, bool top_level)
* could be FALSE (hence not NULL). However, if *all* the
* arms produce NULL then the result is NULL, so we can take
* the intersection of the sets of nonnullable vars, just as
* for OR. Fall through to share code.
* for OR. Fall through to share code.
*/
/* FALL THRU */
case OR_EXPR:
@ -1918,7 +1918,7 @@ is_strict_saop(ScalarArrayOpExpr *expr, bool falseOK)
* variables of the current query level and no uses of volatile functions.
* Such an expr is not necessarily a true constant: it can still contain
* Params and outer-level Vars, not to mention functions whose results
* may vary from one statement to the next. However, the expr's value
* may vary from one statement to the next. However, the expr's value
* will be constant over any one scan of the current query, so it can be
* used as, eg, an indexscan key.
*
@ -2180,7 +2180,7 @@ rowtype_field_matches(Oid rowtypeid, int fieldnum,
* expression tree, for example "2 + 2" => "4". More interestingly,
* we can reduce certain boolean expressions even when they contain
* non-constant subexpressions: "x OR true" => "true" no matter what
* the subexpression x is. (XXX We assume that no such subexpression
* the subexpression x is. (XXX We assume that no such subexpression
* will have important side-effects, which is not necessarily a good
* assumption in the presence of user-defined functions; do we need a
* pg_proc flag that prevents discarding the execution of a function?)
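
A minimal sketch of the boolean constant folding described above, using a toy three-valued expression type instead of PostgreSQL's node trees (all names here are illustrative assumptions):

#include <stdio.h>

/* Toy expression: either a known boolean constant or an opaque,
 * non-constant subexpression such as a column reference. */
typedef enum { EXPR_NONCONST, EXPR_CONST_FALSE, EXPR_CONST_TRUE } ToyExpr;

/* Fold "a OR b" as far as the constants allow: any constant TRUE input
 * makes the whole OR constant TRUE, constant FALSE inputs drop out, and
 * otherwise the result stays non-constant. */
static ToyExpr
fold_or(ToyExpr a, ToyExpr b)
{
    if (a == EXPR_CONST_TRUE || b == EXPR_CONST_TRUE)
        return EXPR_CONST_TRUE;
    if (a == EXPR_CONST_FALSE)
        return b;
    if (b == EXPR_CONST_FALSE)
        return a;
    return EXPR_NONCONST;
}

static const char *
show(ToyExpr e)
{
    if (e == EXPR_CONST_TRUE)
        return "constant true";
    if (e == EXPR_CONST_FALSE)
        return "constant false";
    return "non-constant";
}

int
main(void)
{
    printf("x OR true  -> %s\n", show(fold_or(EXPR_NONCONST, EXPR_CONST_TRUE)));
    printf("x OR false -> %s\n", show(fold_or(EXPR_NONCONST, EXPR_CONST_FALSE)));
    return 0;
}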
@ -2193,7 +2193,7 @@ rowtype_field_matches(Oid rowtypeid, int fieldnum,
*
* Whenever a function is eliminated from the expression by means of
* constant-expression evaluation or inlining, we add the function to
* root->glob->invalItems. This ensures the plan is known to depend on
* root->glob->invalItems. This ensures the plan is known to depend on
* such functions, even though they aren't referenced anymore.
*
* We assume that the tree has already been type-checked and contains
@ -2370,7 +2370,7 @@ eval_const_expressions_mutator(Node *node,

/*
* Code for op/func reduction is pretty bulky, so split it out
* as a separate function. Note: exprTypmod normally returns
* as a separate function. Note: exprTypmod normally returns
* -1 for a FuncExpr, but not when the node is recognizably a
* length coercion; we want to preserve the typmod in the
* eventual Const if so.
@ -2414,7 +2414,7 @@ eval_const_expressions_mutator(Node *node,
OpExpr *newexpr;

/*
* Need to get OID of underlying function. Okay to scribble
* Need to get OID of underlying function. Okay to scribble
* on input to this extent.
*/
set_opfuncid(expr);
@ -2517,7 +2517,7 @@ eval_const_expressions_mutator(Node *node,
/* (NOT okay to try to inline it, though!) */

/*
* Need to get OID of underlying function. Okay to
* Need to get OID of underlying function. Okay to
* scribble on input to this extent.
*/
set_opfuncid((OpExpr *) expr); /* rely on struct
@ -2882,13 +2882,13 @@ eval_const_expressions_mutator(Node *node,
* TRUE: drop all remaining alternatives
* If the first non-FALSE alternative is a constant TRUE,
* we can simplify the entire CASE to that alternative's
* expression. If there are no non-FALSE alternatives,
* expression. If there are no non-FALSE alternatives,
* we simplify the entire CASE to the default result (ELSE).
*
* If we have a simple-form CASE with constant test
* expression, we substitute the constant value for contained
* CaseTestExpr placeholder nodes, so that we have the
* opportunity to reduce constant test conditions. For
* opportunity to reduce constant test conditions. For
* example this allows
* CASE 0 WHEN 0 THEN 1 ELSE 1/0 END
* to reduce to 1 rather than drawing a divide-by-0 error.
@ -3110,7 +3110,7 @@ eval_const_expressions_mutator(Node *node,
{
/*
* We can optimize field selection from a whole-row Var into a
* simple Var. (This case won't be generated directly by the
* simple Var. (This case won't be generated directly by the
* parser, because ParseComplexProjection short-circuits it.
* But it can arise while simplifying functions.) Also, we
* can optimize field selection from a RowExpr construct.
@ -3368,7 +3368,7 @@ simplify_or_arguments(List *args,
/*
* Since the parser considers OR to be a binary operator, long OR lists
* become deeply nested expressions. We must flatten these into long
* argument lists of a single OR operator. To avoid blowing out the stack
* argument lists of a single OR operator. To avoid blowing out the stack
* with recursion of eval_const_expressions, we resort to some tenseness
* here: we keep a list of not-yet-processed inputs, and handle flattening
* of nested ORs by prepending to the to-do list instead of recursing.
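
The flattening strategy described above can be illustrated with a standalone sketch that replaces recursion with an explicit to-do stack; the Node type and bounds below are assumptions for the demo, not the planner's actual data structures:

#include <stdio.h>
#include <stdlib.h>

/* Toy expression node: either a leaf (an arbitrary argument) or a binary
 * OR over two sub-expressions. */
typedef struct Node
{
    int         is_or;          /* 1 = binary OR, 0 = leaf */
    int         leaf_value;     /* valid when is_or == 0 */
    struct Node *left, *right;  /* valid when is_or == 1 */
} Node;

/* Flatten an arbitrarily deep binary OR tree into a flat argument list
 * without recursing: keep a to-do stack of pending nodes, and when an OR
 * is popped, push its children back instead of descending into them. */
static void
flatten_or(Node *root, int *out, int *nout)
{
    Node   *todo[64];           /* explicit work list; bounded for the demo */
    int     ntodo = 0;

    todo[ntodo++] = root;
    while (ntodo > 0)
    {
        Node *n = todo[--ntodo];

        if (n->is_or)
        {
            /* push right first so the left argument is emitted first */
            todo[ntodo++] = n->right;
            todo[ntodo++] = n->left;
        }
        else
            out[(*nout)++] = n->leaf_value;
    }
}

int
main(void)
{
    /* Build ((1 OR 2) OR 3), the shape the parser produces for "a OR b OR c". */
    Node a = {0, 1, NULL, NULL}, b = {0, 2, NULL, NULL}, c = {0, 3, NULL, NULL};
    Node ab = {1, 0, &a, &b};
    Node abc = {1, 0, &ab, &c};
    int  args[8], nargs = 0;

    flatten_or(&abc, args, &nargs);
    for (int i = 0; i < nargs; i++)
        printf("arg %d = %d\n", i, args[i]);
    return 0;
}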
@ -3416,7 +3416,7 @@ simplify_or_arguments(List *args,
}

/*
* OK, we have a const-simplified non-OR argument. Process it per
* OK, we have a const-simplified non-OR argument. Process it per
* comments above.
*/
if (IsA(arg, Const))
@ -3651,7 +3651,7 @@ simplify_function(Oid funcid, Oid result_type, int32 result_typmod,
* deliver a constant result, use a transform function to generate a
* substitute node tree, or expand in-line the body of the function
* definition (which only works for simple SQL-language functions, but
* that is a common case). Each case needs access to the function's
* that is a common case). Each case needs access to the function's
* pg_proc tuple, so fetch it just once.
*
* Note: the allow_non_const flag suppresses both the second and third
@ -3689,7 +3689,7 @@ simplify_function(Oid funcid, Oid result_type, int32 result_typmod,
if (!newexpr && allow_non_const && OidIsValid(func_form->protransform))
{
/*
* Build a dummy FuncExpr node containing the simplified arg list. We
* Build a dummy FuncExpr node containing the simplified arg list. We
* use this approach to present a uniform interface to the transform
* function regardless of how the function is actually being invoked.
*/
@ -3897,7 +3897,7 @@ fetch_function_defaults(HeapTuple func_tuple)
*
* It is possible for some of the defaulted arguments to be polymorphic;
* therefore we can't assume that the default expressions have the correct
* data types already. We have to re-resolve polymorphics and do coercion
* data types already. We have to re-resolve polymorphics and do coercion
* just like the parser did.
*
* This should be a no-op if there are no polymorphic arguments,
@ -4060,7 +4060,7 @@ evaluate_function(Oid funcid, Oid result_type, int32 result_typmod,
* do not re-expand them. Also, if a parameter is used more than once
* in the SQL-function body, we require it not to contain any volatile
* functions (volatiles might deliver inconsistent answers) nor to be
* unreasonably expensive to evaluate. The expensiveness check not only
* unreasonably expensive to evaluate. The expensiveness check not only
* prevents us from doing multiple evaluations of an expensive parameter
* at runtime, but is a safety value to limit growth of an expression due
* to repeated inlining.
@ -4103,7 +4103,7 @@ inline_function(Oid funcid, Oid result_type, Oid result_collid,

/*
* Forget it if the function is not SQL-language or has other showstopper
* properties. (The nargs check is just paranoia.)
* properties. (The nargs check is just paranoia.)
*/
if (funcform->prolang != SQLlanguageId ||
funcform->prosecdef ||
@ -4181,7 +4181,7 @@ inline_function(Oid funcid, Oid result_type, Oid result_collid,
/*
* We just do parsing and parse analysis, not rewriting, because rewriting
* will not affect table-free-SELECT-only queries, which is all that we
* care about. Also, we can punt as soon as we detect more than one
* care about. Also, we can punt as soon as we detect more than one
* command in the function body.
*/
raw_parsetree_list = pg_parse_query(src);
@ -4223,7 +4223,7 @@ inline_function(Oid funcid, Oid result_type, Oid result_collid,
/*
* Make sure the function (still) returns what it's declared to. This
* will raise an error if wrong, but that's okay since the function would
* fail at runtime anyway. Note that check_sql_fn_retval will also insert
* fail at runtime anyway. Note that check_sql_fn_retval will also insert
* a RelabelType if needed to make the tlist expression match the declared
* type of the function.
*
@ -4268,7 +4268,7 @@ inline_function(Oid funcid, Oid result_type, Oid result_collid,
/*
* We may be able to do it; there are still checks on parameter usage to
* make, but those are most easily done in combination with the actual
* substitution of the inputs. So start building expression with inputs
* substitution of the inputs. So start building expression with inputs
* substituted.
*/
usecounts = (int *) palloc0(funcform->pronargs * sizeof(int));
@ -4468,7 +4468,7 @@ evaluate_expr(Expr *expr, Oid result_type, int32 result_typmod,
fix_opfuncids((Node *) expr);

/*
* Prepare expr for execution. (Note: we can't use ExecPrepareExpr
* Prepare expr for execution. (Note: we can't use ExecPrepareExpr
* because it'd result in recursively invoking eval_const_expressions.)
*/
exprstate = ExecInitExpr(expr, NULL);
@ -4580,7 +4580,7 @@ inline_set_returning_function(PlannerInfo *root, RangeTblEntry *rte)
* Refuse to inline if the arguments contain any volatile functions or
* sub-selects. Volatile functions are rejected because inlining may
* result in the arguments being evaluated multiple times, risking a
* change in behavior. Sub-selects are rejected partly for implementation
* change in behavior. Sub-selects are rejected partly for implementation
* reasons (pushing them down another level might change their behavior)
* and partly because they're likely to be expensive and so multiple
* evaluation would be bad.
@ -4607,7 +4607,7 @@ inline_set_returning_function(PlannerInfo *root, RangeTblEntry *rte)

/*
* Forget it if the function is not SQL-language or has other showstopper
* properties. In particular it mustn't be declared STRICT, since we
* properties. In particular it mustn't be declared STRICT, since we
* couldn't enforce that. It also mustn't be VOLATILE, because that is
* supposed to cause it to be executed with its own snapshot, rather than
* sharing the snapshot of the calling query. (Rechecking proretset is
@ -4637,9 +4637,9 @@ inline_set_returning_function(PlannerInfo *root, RangeTblEntry *rte)

/*
* When we call eval_const_expressions below, it might try to add items to
* root->glob->invalItems. Since it is running in the temp context, those
* root->glob->invalItems. Since it is running in the temp context, those
* items will be in that context, and will need to be copied out if we're
* successful. Temporarily reset the list so that we can keep those items
* successful. Temporarily reset the list so that we can keep those items
* separate from the pre-existing list contents.
*/
saveInvalItems = root->glob->invalItems;
@ -4669,7 +4669,7 @@ inline_set_returning_function(PlannerInfo *root, RangeTblEntry *rte)
/*
* Run eval_const_expressions on the function call. This is necessary to
* ensure that named-argument notation is converted to positional notation
* and any default arguments are inserted. It's a bit of overkill for the
* and any default arguments are inserted. It's a bit of overkill for the
* arguments, since they'll get processed again later, but no harm will be
* done.
*/
@ -4721,7 +4721,7 @@ inline_set_returning_function(PlannerInfo *root, RangeTblEntry *rte)
/*
* Make sure the function (still) returns what it's declared to. This
* will raise an error if wrong, but that's okay since the function would
* fail at runtime anyway. Note that check_sql_fn_retval will also insert
* fail at runtime anyway. Note that check_sql_fn_retval will also insert
* RelabelType(s) and/or NULL columns if needed to make the tlist
* expression(s) match the declared type of the function.
*
@ -83,7 +83,7 @@ have_relevant_joinclause(PlannerInfo *root,
* Add 'restrictinfo' to the joininfo list of each relation it requires.
*
* Note that the same copy of the restrictinfo node is linked to by all the
* lists it is in. This allows us to exploit caching of information about
* lists it is in. This allows us to exploit caching of information about
* the restriction clause (but we must be careful that the information does
* not depend on context).
*
@ -127,11 +127,11 @@ compare_fractional_path_costs(Path *path1, Path *path2,
*
* The fuzz_factor argument must be 1.0 plus delta, where delta is the
* fraction of the smaller cost that is considered to be a significant
* difference. For example, fuzz_factor = 1.01 makes the fuzziness limit
* difference. For example, fuzz_factor = 1.01 makes the fuzziness limit
* be 1% of the smaller cost.
*
* The two paths are said to have "equal" costs if both startup and total
* costs are fuzzily the same. Path1 is said to be better than path2 if
* costs are fuzzily the same. Path1 is said to be better than path2 if
* it has fuzzily better startup cost and fuzzily no worse total cost,
* or if it has fuzzily better total cost and fuzzily no worse startup cost.
* Path2 is better than path1 if the reverse holds. Finally, if one path
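
A rough sketch of the fuzzy comparison rule described above (simplified: it ignores pathkeys and parameterization, and all names are hypothetical):

#include <stdio.h>

/* Toy path costs; the real planner tracks these per Path node. */
typedef struct { double startup_cost; double total_cost; } ToyPath;

/* "a is fuzzily greater than b": a must exceed b by more than the fuzz,
 * e.g. fuzz_factor = 1.01 means "greater by more than 1%". */
static int
fuzzy_gt(double a, double b, double fuzz_factor)
{
    return a > b * fuzz_factor;
}

/* Return +1 if p1 is fuzzily better, -1 if p2 is, 0 if the two are
 * fuzzily equal or neither dominates (a simplified comparison). */
static int
fuzzy_compare(const ToyPath *p1, const ToyPath *p2, double fuzz_factor)
{
    int total = 0, startup = 0;

    if (fuzzy_gt(p2->total_cost, p1->total_cost, fuzz_factor))
        total = 1;              /* p1 has fuzzily better total cost */
    else if (fuzzy_gt(p1->total_cost, p2->total_cost, fuzz_factor))
        total = -1;

    if (fuzzy_gt(p2->startup_cost, p1->startup_cost, fuzz_factor))
        startup = 1;            /* p1 has fuzzily better startup cost */
    else if (fuzzy_gt(p1->startup_cost, p2->startup_cost, fuzz_factor))
        startup = -1;

    /* One path wins only if it is better on one measure and no worse on
     * the other. */
    if (total >= 0 && startup >= 0 && (total > 0 || startup > 0))
        return 1;
    if (total <= 0 && startup <= 0 && (total < 0 || startup < 0))
        return -1;
    return 0;
}

int
main(void)
{
    ToyPath p1 = {10.0, 100.0};
    ToyPath p2 = {10.05, 100.5};        /* within 1% on both measures */

    printf("compare = %d (0 means fuzzily equal)\n",
           fuzzy_compare(&p1, &p2, 1.01));
    return 0;
}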
@ -207,12 +207,12 @@ compare_path_costs_fuzzily(Path *path1, Path *path2, double fuzz_factor,
*
* cheapest_total_path is normally the cheapest-total-cost unparameterized
* path; but if there are no unparameterized paths, we assign it to be the
* best (cheapest least-parameterized) parameterized path. However, only
* best (cheapest least-parameterized) parameterized path. However, only
* unparameterized paths are considered candidates for cheapest_startup_path,
* so that will be NULL if there are no unparameterized paths.
*
* The cheapest_parameterized_paths list collects all parameterized paths
* that have survived the add_path() tournament for this relation. (Since
* that have survived the add_path() tournament for this relation. (Since
* add_path ignores pathkeys and startup cost for a parameterized path,
* these will be paths that have best total cost or best row count for their
* parameterization.) cheapest_parameterized_paths always includes the
@ -431,7 +431,7 @@ add_path(RelOptInfo *parent_rel, Path *new_path)
p1_next = lnext(p1);

/*
* Do a fuzzy cost comparison with 1% fuzziness limit. (XXX does this
* Do a fuzzy cost comparison with 1% fuzziness limit. (XXX does this
* percentage need to be user-configurable?)
*/
costcmp = compare_path_costs_fuzzily(new_path, old_path, 1.01,
@ -607,7 +607,7 @@ add_path(RelOptInfo *parent_rel, Path *new_path)
* and have lower bounds for its costs.
*
* Note that we do not know the path's rowcount, since getting an estimate for
* that is too expensive to do before prechecking. We assume here that paths
* that is too expensive to do before prechecking. We assume here that paths
* of a superset parameterization will generate fewer rows; if that holds,
* then paths with different parameterizations cannot dominate each other
* and so we can simply ignore existing paths of another parameterization.
@ -907,7 +907,7 @@ create_append_path(RelOptInfo *rel, List *subpaths, Relids required_outer)
* Compute rows and costs as sums of subplan rows and costs. We charge
* nothing extra for the Append itself, which perhaps is too optimistic,
* but since it doesn't do any selection or projection, it is a pretty
* cheap node. If you change this, see also make_append().
* cheap node. If you change this, see also make_append().
*/
pathnode->path.rows = 0;
pathnode->path.startup_cost = 0;
@ -1456,7 +1456,7 @@ translate_sub_tlist(List *tlist, int relid)
*
* colnos is an integer list of output column numbers (resno's). We are
* interested in whether rows consisting of just these columns are certain
* to be distinct. "Distinctness" is defined according to whether the
* to be distinct. "Distinctness" is defined according to whether the
* corresponding upper-level equality operators listed in opids would think
* the values are distinct. (Note: the opids entries could be cross-type
* operators, and thus not exactly the equality operators that the subquery
@ -1577,7 +1577,7 @@ query_is_distinct_for(Query *query, List *colnos, List *opids)
* distinct_col_search - subroutine for query_is_distinct_for
*
* If colno is in colnos, return the corresponding element of opids,
* else return InvalidOid. (We expect colnos does not contain duplicates,
* else return InvalidOid. (We expect colnos does not contain duplicates,
* so the result is well-defined.)
*/
static Oid
@ -1977,10 +1977,10 @@ create_hashjoin_path(PlannerInfo *root,

/*
* A hashjoin never has pathkeys, since its output ordering is
* unpredictable due to possible batching. XXX If the inner relation is
* unpredictable due to possible batching. XXX If the inner relation is
* small enough, we could instruct the executor that it must not batch,
* and then we could assume that the output inherits the outer relation's
* ordering, which might save a sort step. However there is considerable
* ordering, which might save a sort step. However there is considerable
* downside if our estimate of the inner relation size is badly off. For
* the moment we don't risk it. (Note also that if we wanted to take this
* seriously, joinpath.c would have to consider many more paths for the
@ -2007,7 +2007,7 @@ create_hashjoin_path(PlannerInfo *root,
* same parameterization level, ensuring that they all enforce the same set
* of join quals (and thus that that parameterization can be attributed to
* an append path built from such paths). Currently, only a few path types
* are supported here, though more could be added at need. We return NULL
* are supported here, though more could be added at need. We return NULL
* if we can't reparameterize the given path.
*
* Note: we intentionally do not pass created paths to add_path(); it would
@ -2039,7 +2039,7 @@ reparameterize_path(PlannerInfo *root, Path *path,
/*
* We can't use create_index_path directly, and would not want
* to because it would re-compute the indexqual conditions
* which is wasted effort. Instead we hack things a bit:
* which is wasted effort. Instead we hack things a bit:
* flat-copy the path node, revise its param_info, and redo
* the cost estimate.
*/
@ -60,7 +60,7 @@ make_placeholder_expr(PlannerInfo *root, Expr *expr, Relids phrels)
* We build PlaceHolderInfos only for PHVs that are still present in the
* simplified query passed to query_planner().
*
* Note: this should only be called after query_planner() has started. Also,
* Note: this should only be called after query_planner() has started. Also,
* create_new_ph must not be TRUE after deconstruct_jointree begins, because
* make_outerjoininfo assumes that we already know about all placeholders.
*/
@ -94,7 +94,7 @@ find_placeholder_info(PlannerInfo *root, PlaceHolderVar *phv,
/*
* Any referenced rels that are outside the PHV's syntactic scope are
* LATERAL references, which should be included in ph_lateral but not in
* ph_eval_at. If no referenced rels are within the syntactic scope,
* ph_eval_at. If no referenced rels are within the syntactic scope,
* force evaluation at the syntactic location.
*/
rels_used = pull_varnos((Node *) phv->phexpr);
@ -427,12 +427,12 @@ estimate_rel_size(Relation rel, int32 *attr_widths,
* minimum size estimate of 10 pages. The idea here is to avoid
* assuming a newly-created table is really small, even if it
* currently is, because that may not be true once some data gets
* loaded into it. Once a vacuum or analyze cycle has been done
* loaded into it. Once a vacuum or analyze cycle has been done
* on it, it's more reasonable to believe the size is somewhat
* stable.
*
* (Note that this is only an issue if the plan gets cached and
* used again after the table has been filled. What we're trying
* used again after the table has been filled. What we're trying
* to avoid is using a nestloop-type plan on a table that has
* grown substantially since the plan was made. Normally,
* autovacuum/autoanalyze will occur once enough inserts have
@ -441,7 +441,7 @@ estimate_rel_size(Relation rel, int32 *attr_widths,
* such as temporary tables.)
*
* We approximate "never vacuumed" by "has relpages = 0", which
* means this will also fire on genuinely empty relations. Not
* means this will also fire on genuinely empty relations. Not
* great, but fortunately that's a seldom-seen case in the real
* world, and it shouldn't degrade the quality of the plan too
* much anyway to err in this direction.
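
A simplified sketch of this sizing heuristic; the 10-page floor mirrors the comment above, while the density fallback and constants are assumptions for illustration only:

#include <stdio.h>

#define BLCKSZ 8192             /* assumed page size for this sketch */

/* If the relation has never been vacuumed or analyzed (approximated by
 * relpages == 0), assume at least 10 pages instead of trusting the
 * current physical size; then scale the tuple density up to the assumed
 * number of pages. */
static double
estimate_tuples(unsigned curpages, unsigned relpages, double reltuples,
                int tuple_width)
{
    double density;

    if (relpages == 0 && curpages < 10)
        curpages = 10;

    if (relpages > 0)
        density = reltuples / (double) relpages;    /* observed density */
    else
        density = (double) BLCKSZ / tuple_width;    /* crude fallback */

    return density * curpages;
}

int
main(void)
{
    /* A brand-new, still tiny table: the 10-page floor keeps the planner
     * from assuming it will stay nearly empty. */
    printf("estimated tuples: %.0f\n", estimate_tuples(1, 0, 0.0, 100));
    return 0;
}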
@ -786,7 +786,7 @@ relation_excluded_by_constraints(PlannerInfo *root,
return false;

/*
* OK to fetch the constraint expressions. Include "col IS NOT NULL"
* OK to fetch the constraint expressions. Include "col IS NOT NULL"
* expressions for attnotnull columns, in case we can refute those.
*/
constraint_pred = get_relation_constraints(root, rte->relid, rel, true);
@ -834,7 +834,7 @@ relation_excluded_by_constraints(PlannerInfo *root,
* Exception: if there are any dropped columns, we punt and return NIL.
* Ideally we would like to handle the dropped-column case too. However this
* creates problems for ExecTypeFromTL, which may be asked to build a tupdesc
* for a tlist that includes vars of no-longer-existent types. In theory we
* for a tlist that includes vars of no-longer-existent types. In theory we
* could dig out the required info from the pg_attribute entries of the
* relation, but that data is not readily available to ExecTypeFromTL.
* For now, we don't apply the physical-tlist optimization when there are
@ -133,7 +133,7 @@ predicate_implied_by(List *predicate_list, List *restrictinfo_list)

/*
* If either input is a single-element list, replace it with its lone
* member; this avoids one useless level of AND-recursion. We only need
* member; this avoids one useless level of AND-recursion. We only need
* to worry about this at top level, since eval_const_expressions should
* have gotten rid of any trivial ANDs or ORs below that.
*/
@ -191,7 +191,7 @@ predicate_refuted_by(List *predicate_list, List *restrictinfo_list)

/*
* If either input is a single-element list, replace it with its lone
* member; this avoids one useless level of AND-recursion. We only need
* member; this avoids one useless level of AND-recursion. We only need
* to worry about this at top level, since eval_const_expressions should
* have gotten rid of any trivial ANDs or ORs below that.
*/
@ -225,7 +225,7 @@ predicate_refuted_by(List *predicate_list, List *restrictinfo_list)
* OR-expr A => AND-expr B iff: A => each of B's components
* OR-expr A => OR-expr B iff: each of A's components => any of B's
*
* An "atom" is anything other than an AND or OR node. Notice that we don't
* An "atom" is anything other than an AND or OR node. Notice that we don't
* have any special logic to handle NOT nodes; these should have been pushed
* down or eliminated where feasible by prepqual.c.
*
@ -658,7 +658,7 @@ predicate_refuted_by_recurse(Node *clause, Node *predicate)
* We cannot make the stronger conclusion that B is refuted if B
* implies A's arg; that would only prove that B is not-TRUE, not
* that it's not NULL either. Hence use equal() rather than
* predicate_implied_by_recurse(). We could do the latter if we
* predicate_implied_by_recurse(). We could do the latter if we
* ever had a need for the weak form of refutation.
*/
not_arg = extract_strong_not_arg(clause);
@ -820,7 +820,7 @@ predicate_classify(Node *clause, PredIterInfo info)
}

/*
* PredIterInfo routines for iterating over regular Lists. The iteration
* PredIterInfo routines for iterating over regular Lists. The iteration
* state variable is the next ListCell to visit.
*/
static void
@ -1014,13 +1014,13 @@ arrayexpr_cleanup_fn(PredIterInfo info)
* implies another:
*
* A simple and general way is to see if they are equal(); this works for any
* kind of expression. (Actually, there is an implied assumption that the
* kind of expression. (Actually, there is an implied assumption that the
* functions in the expression are immutable, ie dependent only on their input
* arguments --- but this was checked for the predicate by the caller.)
*
* When the predicate is of the form "foo IS NOT NULL", we can conclude that
* the predicate is implied if the clause is a strict operator or function
* that has "foo" as an input. In this case the clause must yield NULL when
* that has "foo" as an input. In this case the clause must yield NULL when
* "foo" is NULL, which we can take as equivalent to FALSE because we know
* we are within an AND/OR subtree of a WHERE clause. (Again, "foo" is
* already known immutable, so the clause will certainly always fail.)
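
The reasoning above can be illustrated with a tiny three-valued sketch: a strict operator yields UNKNOWN for a NULL input, and UNKNOWN acts like FALSE in a WHERE clause, so "foo < 42" can only pass rows where foo IS NOT NULL (toy types, not PostgreSQL code):

#include <stdbool.h>
#include <stdio.h>

/* Three-valued SQL-style boolean for the sketch. */
typedef enum { TV_FALSE, TV_TRUE, TV_UNKNOWN } TriBool;

/* A strict "<" over a nullable integer: any NULL input yields UNKNOWN. */
static TriBool
strict_lt(bool isnull, int value, int constant)
{
    if (isnull)
        return TV_UNKNOWN;
    return (value < constant) ? TV_TRUE : TV_FALSE;
}

int
main(void)
{
    /* In a WHERE clause UNKNOWN behaves like FALSE, so no row with a NULL
     * foo can pass "foo < 42"; hence the clause implies "foo IS NOT NULL". */
    TriBool null_input = strict_lt(true, 0, 42);
    TriBool real_input = strict_lt(false, 7, 42);

    printf("foo IS NULL: foo < 42 is %s\n",
           null_input == TV_UNKNOWN ? "UNKNOWN (row filtered out)" : "?");
    printf("foo = 7:     foo < 42 is %s\n",
           real_input == TV_TRUE ? "TRUE" : "FALSE");
    return 0;
}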
@ -1244,7 +1244,7 @@ list_member_strip(List *list, Expr *datum)
*
* The strategy numbers defined by btree indexes (see access/skey.h) are:
* (1) < (2) <= (3) = (4) >= (5) >
* and in addition we use (6) to represent <>. <> is not a btree-indexable
* and in addition we use (6) to represent <>. <> is not a btree-indexable
* operator, but we assume here that if an equality operator of a btree
* opfamily has a negator operator, the negator behaves as <> for the opfamily.
* (This convention is also known to get_op_btree_interpretation().)
@ -1328,7 +1328,7 @@ static const StrategyNumber BT_refute_table[6][6] = {
* if not able to prove it.
*
* What we look for here is binary boolean opclauses of the form
* "foo op constant", where "foo" is the same in both clauses. The operators
* "foo op constant", where "foo" is the same in both clauses. The operators
* and constants can be different but the operators must be in the same btree
* operator family. We use the above operator implication tables to
* derive implications between nonidentical clauses. (Note: "foo" is known
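
As a rough illustration of table-driven implication between "foo op constant" clauses, here is a standalone sketch over plain integers; the table below is a simplified stand-in built for this example, not a copy of PostgreSQL's BT_implies_table:

#include <stdbool.h>
#include <stdio.h>

/* Btree strategy numbers, as in the comment above: 1 <, 2 <=, 3 =, 4 >=, 5 >.
 * (Strategy 6, <>, is left out of this sketch.) */
enum { BTLess = 1, BTLessEq, BTEq, BTGreaterEq, BTGreater };

/* implies_test[C][P] gives the comparison that must hold between the clause
 * constant c1 and the predicate constant c2 for "foo C c1" to imply
 * "foo P c2"; 0 means the implication cannot be proven this way. */
static const int implies_test[6][6] = {
    [BTLess]      = {[BTLess] = BTLessEq, [BTLessEq] = BTLessEq},
    [BTLessEq]    = {[BTLess] = BTLess, [BTLessEq] = BTLessEq},
    [BTEq]        = {[BTLess] = BTLess, [BTLessEq] = BTLessEq, [BTEq] = BTEq,
                     [BTGreaterEq] = BTGreaterEq, [BTGreater] = BTGreater},
    [BTGreaterEq] = {[BTGreaterEq] = BTGreaterEq, [BTGreater] = BTGreater},
    [BTGreater]   = {[BTGreaterEq] = BTGreaterEq, [BTGreater] = BTGreaterEq},
};

/* Evaluate one strategy on ordinary integers. */
static bool
eval_strategy(int strategy, long a, long b)
{
    switch (strategy)
    {
        case BTLess:      return a < b;
        case BTLessEq:    return a <= b;
        case BTEq:        return a == b;
        case BTGreaterEq: return a >= b;
        case BTGreater:   return a > b;
        default:          return false;
    }
}

/* Does "foo clause_op c1" imply "foo pred_op c2"? */
static bool
clause_implies_pred(int clause_op, long c1, int pred_op, long c2)
{
    int test = implies_test[clause_op][pred_op];

    return test != 0 && eval_strategy(test, c1, c2);
}

int
main(void)
{
    printf("x < 5 implies x <= 7 : %d\n", clause_implies_pred(BTLess, 5, BTLessEq, 7));
    printf("x < 5 implies x < 3  : %d\n", clause_implies_pred(BTLess, 5, BTLess, 3));
    printf("x = 4 implies x >= 1 : %d\n", clause_implies_pred(BTEq, 4, BTGreaterEq, 1));
    return 0;
}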
@ -1418,7 +1418,7 @@ btree_predicate_proof(Expr *predicate, Node *clause, bool refute_it)
/*
* Check for matching subexpressions on the non-Const sides. We used to
* only allow a simple Var, but it's about as easy to allow any
* expression. Remember we already know that the pred expression does not
* expression. Remember we already know that the pred expression does not
* contain any non-immutable functions, so identical expressions should
* yield identical results.
*/
@ -1690,7 +1690,7 @@ get_btree_test_op(Oid pred_op, Oid clause_op, bool refute_it)
* Last check: test_op must be immutable.
*
* Note that we require only the test_op to be immutable, not the
* original clause_op. (pred_op is assumed to have been checked
* original clause_op. (pred_op is assumed to have been checked
* immutable by the caller.) Essentially we are assuming that the
* opfamily is consistent even if it contains operators that are
* merely stable.
@ -262,7 +262,7 @@ RelOptInfo *
find_join_rel(PlannerInfo *root, Relids relids)
{
/*
* Switch to using hash lookup when list grows "too long". The threshold
* Switch to using hash lookup when list grows "too long". The threshold
* is arbitrary and is known only here.
*/
if (!root->join_rel_hash && list_length(root->join_rel_list) > 32)
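
The list-then-hash policy noted above can be sketched in isolation: search a short list linearly, and build (then maintain) a hash table once the list passes a threshold. Everything below is an illustrative toy, not the planner's join_rel_hash machinery; only the threshold of 32 is taken from the comment.

#include <stdio.h>
#include <string.h>

#define LIST_THRESHOLD 32       /* switch to hashed lookup past this size */
#define HASH_SIZE 256           /* toy fixed-size open-addressing table */

/* A toy "join rel" keyed by an integer id (standing in for a Relids set). */
typedef struct { int key; int payload; } Rel;

typedef struct
{
    Rel  list[1024];
    int  nlist;
    int  hash_built;            /* 0 until the list grows "too long" */
    int  hash_keys[HASH_SIZE];  /* -1 = empty slot */
    int  hash_index[HASH_SIZE]; /* index into list[] */
} RelSet;

static void
hash_insert(RelSet *s, int key, int idx)
{
    int h = (unsigned) key % HASH_SIZE;

    while (s->hash_keys[h] != -1)
        h = (h + 1) % HASH_SIZE;        /* linear probing */
    s->hash_keys[h] = key;
    s->hash_index[h] = idx;
}

static void
add_rel(RelSet *s, int key, int payload)
{
    s->list[s->nlist].key = key;
    s->list[s->nlist].payload = payload;
    s->nlist++;

    /* Build the hash table lazily, only once the list gets long; keep
     * maintaining it afterwards. */
    if (!s->hash_built && s->nlist > LIST_THRESHOLD)
    {
        memset(s->hash_keys, -1, sizeof(s->hash_keys));
        for (int i = 0; i < s->nlist; i++)
            hash_insert(s, s->list[i].key, i);
        s->hash_built = 1;
    }
    else if (s->hash_built)
        hash_insert(s, key, s->nlist - 1);
}

static Rel *
find_rel(RelSet *s, int key)
{
    if (!s->hash_built)
    {
        for (int i = 0; i < s->nlist; i++)      /* short list: linear scan */
            if (s->list[i].key == key)
                return &s->list[i];
        return NULL;
    }
    for (int h = (unsigned) key % HASH_SIZE;; h = (h + 1) % HASH_SIZE)
    {
        if (s->hash_keys[h] == -1)
            return NULL;
        if (s->hash_keys[h] == key)
            return &s->list[s->hash_index[h]];
    }
}

int
main(void)
{
    static RelSet s;

    for (int i = 0; i < 100; i++)
        add_rel(&s, i, i * 10);
    printf("rel 7 payload = %d, rel 77 payload = %d\n",
           find_rel(&s, 7)->payload, find_rel(&s, 77)->payload);
    return 0;
}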
@ -448,7 +448,7 @@ build_join_rel(PlannerInfo *root,

/*
* Also, if dynamic-programming join search is active, add the new joinrel
* to the appropriate sublist. Note: you might think the Assert on number
* to the appropriate sublist. Note: you might think the Assert on number
* of members should be for equality, but some of the level 1 rels might
* have been joinrels already, so we can only assert <=.
*/
@ -529,7 +529,7 @@ build_joinrel_tlist(PlannerInfo *root, RelOptInfo *joinrel,
* the join list need only be computed once for any join RelOptInfo.
* The join list is fully determined by the set of rels making up the
* joinrel, so we should get the same results (up to ordering) from any
* candidate pair of sub-relations. But the restriction list is whatever
* candidate pair of sub-relations. But the restriction list is whatever
* is not handled in the sub-relations, so it depends on which
* sub-relations are considered.
*
@ -538,7 +538,7 @@ build_joinrel_tlist(PlannerInfo *root, RelOptInfo *joinrel,
* we put it into the joininfo list for the joinrel. Otherwise,
* the clause is now a restrict clause for the joined relation, and we
* return it to the caller of build_joinrel_restrictlist() to be stored in
* join paths made from this pair of sub-relations. (It will not need to
* join paths made from this pair of sub-relations. (It will not need to
* be considered further up the join tree.)
*
* In many case we will find the same RestrictInfos in both input
@ -557,7 +557,7 @@ build_joinrel_tlist(PlannerInfo *root, RelOptInfo *joinrel,
*
* NB: Formerly, we made deep(!) copies of each input RestrictInfo to pass
* up to the join relation. I believe this is no longer necessary, because
* RestrictInfo nodes are no longer context-dependent. Instead, just include
* RestrictInfo nodes are no longer context-dependent. Instead, just include
* the original nodes in the lists made for the join relation.
*/
static List *
@ -577,7 +577,7 @@ build_joinrel_restrictlist(PlannerInfo *root,
result = subbuild_joinrel_restrictlist(joinrel, inner_rel->joininfo, result);

/*
* Add on any clauses derived from EquivalenceClasses. These cannot be
* Add on any clauses derived from EquivalenceClasses. These cannot be
* redundant with the clauses in the joininfo lists, so don't bother
* checking.
*/
@ -915,7 +915,7 @@ get_joinrel_parampathinfo(PlannerInfo *root, RelOptInfo *joinrel,
*restrict_clauses);

/*
* And now we can build the ParamPathInfo. No point in saving the
* And now we can build the ParamPathInfo. No point in saving the
* input-pair-dependent clause list, though.
*
* Note: in GEQO mode, we'll be called in a temporary memory context, but
@ -935,8 +935,8 @@ get_joinrel_parampathinfo(PlannerInfo *root, RelOptInfo *joinrel,
* Get the ParamPathInfo for a parameterized path for an append relation.
*
* For an append relation, the rowcount estimate will just be the sum of
* the estimates for its children. However, we still need a ParamPathInfo
* to flag the fact that the path requires parameters. So this just creates
* the estimates for its children. However, we still need a ParamPathInfo
* to flag the fact that the path requires parameters. So this just creates
* a suitable struct with zero ppi_rows (and no ppi_clauses either, since
* the Append node isn't responsible for checking quals).
*/
@ -152,7 +152,7 @@ make_restrictinfo_from_bitmapqual(Path *bitmapqual,
/*
* Here, we only detect qual-free subplans. A qual-free subplan would
* cause us to generate "... OR true ..." which we may as well reduce
* to just "true". We do not try to eliminate redundant subclauses
* to just "true". We do not try to eliminate redundant subclauses
* because (a) it's not as likely as in the AND case, and (b) we might
* well be working with hundreds or even thousands of OR conditions,
* perhaps from a long IN list. The performance of list_append_unique
@ -250,7 +250,7 @@ make_restrictinfo_from_bitmapqual(Path *bitmapqual,
* We know that the index predicate must have been implied by
* the query condition as a whole, but it may or may not be
* implied by the conditions that got pushed into the
* bitmapqual. Avoid generating redundant conditions.
* bitmapqual. Avoid generating redundant conditions.
*/
if (!predicate_implied_by(list_make1(pred), result))
result = lappend(result,
@ -397,7 +397,7 @@ make_restrictinfo_internal(Expr *clause,

/*
* Fill in all the cacheable fields with "not yet set" markers. None of
* these will be computed until/unless needed. Note in particular that we
* these will be computed until/unless needed. Note in particular that we
* don't mark a binary opclause as mergejoinable or hashjoinable here;
* that happens only if it appears in the right context (top level of a
* joinclause list).
@ -26,7 +26,7 @@
/*
* tlist_member
* Finds the (first) member of the given tlist whose expression is
* equal() to the given expression. Result is NULL if no such member.
* equal() to the given expression. Result is NULL if no such member.
*/
TargetEntry *
tlist_member(Node *node, List *targetlist)
@ -165,7 +165,7 @@ pull_varnos_walker(Node *node, pull_varnos_context *context)
* lower than that if it references only a subset of the rels in its
* syntactic scope. It might also contain lateral references, but we
* should ignore such references when computing the set of varnos in
* an expression tree. Also, if the PHV contains no variables within
* an expression tree. Also, if the PHV contains no variables within
* its syntactic scope, it will be forced to be evaluated exactly at
* the syntactic scope, so take that as the relid set.
*/
@ -364,7 +364,7 @@ contain_var_clause_walker(Node *node, void *context)
*
* Returns true if any such Var found.
*
* Will recurse into sublinks. Also, may be invoked directly on a Query.
* Will recurse into sublinks. Also, may be invoked directly on a Query.
*/
bool
contain_vars_of_level(Node *node, int levelsup)
@ -424,10 +424,10 @@ contain_vars_of_level_walker(Node *node, int *sublevels_up)
* Find the parse location of any Var of the specified query level.
*
* Returns -1 if no such Var is in the querytree, or if they all have
* unknown parse location. (The former case is probably caller error,
* unknown parse location. (The former case is probably caller error,
* but we don't bother to distinguish it from the latter case.)
*
* Will recurse into sublinks. Also, may be invoked directly on a Query.
* Will recurse into sublinks. Also, may be invoked directly on a Query.
*
* Note: it might seem appropriate to merge this functionality into
* contain_vars_of_level, but that would complicate that function's API.
@ -514,7 +514,7 @@ locate_var_of_level_walker(Node *node,
* Upper-level vars (with varlevelsup > 0) should not be seen here,
* likewise for upper-level Aggrefs and PlaceHolderVars.
*
* Returns list of nodes found. Note the nodes themselves are not
* Returns list of nodes found. Note the nodes themselves are not
* copied, only referenced.
*
* Does not examine subqueries, therefore must only be used after reduction
@ -591,7 +591,7 @@ pull_var_clause_walker(Node *node, pull_var_clause_context *context)
* flatten_join_alias_vars
* Replace Vars that reference JOIN outputs with references to the original
* relation variables instead. This allows quals involving such vars to be
* pushed down. Whole-row Vars that reference JOIN relations are expanded
* pushed down. Whole-row Vars that reference JOIN relations are expanded
* into RowExpr constructs that name the individual output Vars. This
* is necessary since we will not scan the JOIN as a base relation, which
* is the only way that the executor can directly handle whole-row Vars.
@ -603,7 +603,7 @@ pull_var_clause_walker(Node *node, pull_var_clause_context *context)
* entries might now be arbitrary expressions, not just Vars. This affects
* this function in one important way: we might find ourselves inserting
* SubLink expressions into subqueries, and we must make sure that their
* Query.hasSubLinks fields get set to TRUE if so. If there are any
* Query.hasSubLinks fields get set to TRUE if so. If there are any
* SubLinks in the join alias lists, the outer Query should already have
* hasSubLinks = TRUE, so this is only relevant to un-flattened subqueries.
*