mirror of https://github.com/postgres/postgres.git synced 2025-05-21 15:54:08 +03:00

270 Commits

Author SHA1 Message Date
Richard Guo
bc5a08af3c Avoid NullTest deduction for clone clauses
In commit b262ad440, we introduced an optimization that reduces an IS
NOT NULL qual on a column defined as NOT NULL to constant true, and an
IS NULL qual on a NOT NULL column to constant false, provided we can
prove that the input expression of the NullTest is not nullable by any
outer join.  This deduction happens after we have generated multiple
clones of the same qual condition to cope with commuted-left-join
cases.

However, performing the NullTest deduction for clone clauses can be
unsafe, because we don't have a reliable way to determine if the input
expression of a NullTest is non-nullable: nullingrel bits in clone
clauses may not reflect reality, so we dare not draw conclusions from
clones about whether Vars are guaranteed not-null.
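
A simplified illustration of the underlying nullability hazard (table
names are made up; the clone-clause case involves join quals cloned for
commuted outer joins, but the principle is the same):

    CREATE TABLE t1 (a int);
    CREATE TABLE t2 (b int NOT NULL);

    -- t2.b is declared NOT NULL, yet rows of t1 with no match in t2 come
    -- back null-extended, so this qual must not be reduced to constant
    -- false:
    SELECT * FROM t1 LEFT JOIN t2 ON t1.a = t2.b WHERE t2.b IS NULL;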

To fix, we check whether the given RestrictInfo is a clone clause in
restriction_is_always_true and restriction_is_always_false, and avoid
performing any reduction if it is.

There are several ensuing plan changes in predicate.out, and we have
to modify the tests to ensure that they continue to test what they are
intended to.  Additionally, this fix causes the test case added in
f00ab1fd1 to no longer trigger the bug that commit fixed, so we also
remove that test case.

Back-patch to v17 where this bug crept in.

Reported-by: Ronald Cruz <cruz@rentec.com>
Diagnosed-by: Tom Lane <tgl@sss.pgh.pa.us>
Author: Richard Guo <guofenglinux@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/f5320d3d-77af-4ce8-b9c3-4715ff33f213@rentec.com
Backpatch-through: 17
2025-03-04 16:17:19 +09:00
Richard Guo
78b1c553bb Fix inconsistent RestrictInfo serial numbers
When we generate multiple clones of the same qual condition to cope
with outer join identity 3, we need to ensure that all the clones get
the same serial number.  To achieve this, we reset the
root->last_rinfo_serial counter each time we produce RestrictInfo(s)
from the qual list (see deconstruct_distribute_oj_quals).  This
approach works only if we ensure that we are not changing the qual
list in any way that'd affect the number of RestrictInfos built from
it.

However, with b262ad440, an IS NULL qual on a NOT NULL column might
result in an additional constant-FALSE RestrictInfo.  And different
versions of the same qual clause can lead to different conclusions
about whether it can be reduced to constant-FALSE.  This would affect
the number of RestrictInfos built from the qual list for different
versions, causing inconsistent RestrictInfo serial numbers across
multiple clones of the same qual.  This inconsistency can confuse
users of these serial numbers, such as rebuild_joinclause_attr_needed,
and lead to planner errors such as "ERROR:  variable not found in
subplan target lists".

To fix, reset the root->last_rinfo_serial counter after generating the
additional constant-FALSE RestrictInfo.

Back-patch to v17 where the issue crept in.  In v17, I failed to make
a test case that would expose this bug, so no test case for v17.

Author: Richard Guo
Discussion: https://postgr.es/m/CAMbWs4-B6kafn+LmPuh-TYFwFyEm-vVj3Qqv7Yo-69CEv14rRg@mail.gmail.com
2024-11-08 11:24:26 +09:00
David Rowley
b9ecefecc7 Fix recently introduced typo in code comment
Reported-by: Richard Guo
Discussion: https://postgr.es/m/CAMbWs49kAsZUsj7-0SBLvE9+uKz0RCqMEmM3NVytc1YvS8sTrQ@mail.gmail.com
2024-04-12 23:15:52 +12:00
David Rowley
3af7040985 Fix IS [NOT] NULL qual optimization for inheritance tables
b262ad440 added code to have the planner remove redundant IS NOT NULL
quals and eliminate needless scans for IS NULL quals on tables where the
qual's column has a NOT NULL constraint.

That commit failed to consider that an inheritance parent table could
have differing NOT NULL constraints between the parent and the child.
This caused issues because, if we eliminated a qual on the parent, that
qual was never added to the parent's baserestrictinfo and so was also
missing when applying the quals to child tables in
apply_child_basequals(), even for children that still needed it.

Here we fix this by not applying the optimization to remove redundant
quals to RelOptInfos belonging to inheritance parents and applying the
optimization again in apply_child_basequals().  Effectively, this means
that the parent and child are considered independently as the parent has
both an inh=true and inh=false RTE and we still apply the optimization
to the RelOptInfo corresponding to the inh=false RTE.
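
As a sketch of the situation (schema is illustrative; here the child
adds a NOT NULL constraint that the parent lacks):

    CREATE TABLE parent (a int);
    CREATE TABLE child (a int NOT NULL) INHERITS (parent);

    -- The qual cannot be discarded for the inh=true parent rel, since
    -- parent rows may contain NULLs, but apply_child_basequals() can
    -- still drop it when building the child's quals:
    EXPLAIN SELECT * FROM parent WHERE a IS NOT NULL;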

We're able to still apply the optimization in add_base_clause_to_rel()
for partitioned tables as the NULLability of partitions must match that
of their parent.  And, if we ever expand restriction_is_always_false()
and restriction_is_always_true() to handle partition constraints then we
can apply the same logic as, even in multi-level partitioned tables,
there's no way to route values to a partition when the qual does not
match the partition qual of the partitioned table's parent partition.
The same is true for CHECK constraints as those must also match between
parent partitioned tables and their partitions.

Author: Richard Guo, David Rowley
Discussion: https://postgr.es/m/CAMbWs4930gQSZmjR7aANzEapdy61gCg6z8dT-kAEYD0sYWKPdQ@mail.gmail.com
2024-04-12 20:07:53 +12:00
Peter Eisentraut
dbbca2cf29 Remove unused #include's from backend .c files
as determined by include-what-you-use (IWYU)

While IWYU also suggests to *add* a bunch of #include's (which is its
main purpose), this patch does not do that.  In some cases, a more
specific #include replaces another less specific one.

Some manual adjustments of the automatic result:

- IWYU currently doesn't know about includes that provide global
  variable declarations (like -Wmissing-variable-declarations), so
  those includes are being kept manually.

- All includes for port(ability) headers are being kept for now, to
  play it safe.

- No changes of catalog/pg_foo.h to catalog/pg_foo_d.h, to keep the
  patch from exploding in size.

Note that this patch touches just *.c files, so nothing declared in
header files changes in hidden ways.

As a small example, in src/backend/access/transam/rmgr.c, some IWYU
pragma annotations are added to handle a special case there.

Discussion: https://www.postgresql.org/message-id/flat/af837490-6b2f-46df-ba05-37ea6a6653fc%40eisentraut.org
2024-03-04 12:02:20 +01:00
David Rowley
b262ad440e Add better handling of redundant IS [NOT] NULL quals
Until now PostgreSQL has not been very smart about optimizing away IS
NOT NULL base quals on columns defined as NOT NULL.  The evaluation of
these needless quals adds overhead.  Ordinarily, anyone who came
complaining about that would likely just have been told to not include
the qual in their query if it's not required.  However, a recent bug
report indicates this might not always be possible.

Bug 17540 highlighted that when we optimize Min/Max aggregates the IS NOT
NULL qual that the planner adds to make the rewritten plan ignore NULLs
can cause issues with poor index choice.  That particular case
demonstrated that other quals, especially ones where no statistics are
available to allow the planner a chance at estimating an approximate
selectivity for, can result in poor index choice due to cheap startup
paths being preferred with LIMIT 1.

Here we take a generic approach to fixing this by having the planner check
for NOT NULL columns and just have the planner remove these quals (when
they're not needed) for all queries, not just when optimizing Min/Max
aggregates.

Additionally, here we also detect IS NULL quals on a NOT NULL column and
transform that into a gating qual so that we don't have to perform the
scan at all.  This also works for join relations when the Var is not
nullable by any outer join.
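
For example (schema and expected plan shapes are illustrative, not taken
from the regression tests):

    CREATE TABLE t (id int NOT NULL, payload text);

    -- "id IS NOT NULL" is provably true, so the planner can drop the
    -- qual rather than evaluating it for every row:
    EXPLAIN SELECT * FROM t WHERE id IS NOT NULL;

    -- "id IS NULL" is provably false; we expect a gating qual so the
    -- scan need not run at all ("One-Time Filter: false"):
    EXPLAIN SELECT * FROM t WHERE id IS NULL;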

This also helps with the self-join removal work as it must replace
strict join quals with IS NOT NULL quals to ensure equivalence with the
original query.

Author: David Rowley, Richard Guo, Andy Fan
Reviewed-by: Richard Guo, David Rowley
Discussion: https://postgr.es/m/CAApHDvqg6XZDhYRPz0zgOcevSMo0d3vxA9DvHrZtKfqO30WTnw@mail.gmail.com
Discussion: https://postgr.es/m/17540-7aa1855ad5ec18b4%40postgresql.org
2024-01-23 18:09:18 +13:00
Bruce Momjian
29275b1d17 Update copyright for 2024
Reported-by: Michael Paquier

Discussion: https://postgr.es/m/ZZKTDPxBBMt3C0J9@paquier.xyz

Backpatch-through: 12
2024-01-03 20:49:05 -05:00
Robert Haas
0d9937d118 Fix typos in comments and in one isolation test.
Dagfinn Ilmari Mannsåker, reviewed by Shubham Khanna. Some subtractions
by me.

Discussion: http://postgr.es/m/87le9fmi01.fsf@wibble.ilmari.org
2024-01-02 12:05:41 -05:00
David Rowley
930d2b442f Don't use bms_membership() in cases where we don't need to
00b41463c adjusted Bitmapset so that an empty set is always represented
as NULL.  This makes checking for empty sets far cheaper than it used
to be.

There were various places in the code where we'd call bms_membership()
to handle the 3 possible BMS_Membership values.  For the BMS_SINGLETON
case, we'd also call bms_singleton_member() to find the single set member.
This can now be done in a more optimal way by first checking if the set is
NULL and then not bothering with bms_membership(), simply calling
bms_get_singleton_member() instead to find the single member.  This
function will return false if there are multiple members in the set.

Here we also tidy up some logic in examine_variable() for the single
member case.  There's now no need to call bms_is_member() as we've
already established that we're working with a singleton Bitmapset, so we
can just check if varRelid matches the singleton member.

Reviewed-by: Richard Guo
Discussion: https://postgr.es/m/CAApHDvqW+CxNPcY245GaWiuqkkqgTudtG2ncGvvSjGn2wdTZLA@mail.gmail.com
2023-11-28 10:41:12 +13:00
Tom Lane
efeb12ef0b Don't include outer join relids in lateral_relids bitmapsets.
This avoids an assertion failure when outer joins are rearranged
per identity 3.  Listing only the baserels from a PlaceHolderVar's
ph_lateral set should be enough to ensure that the required values
are available when we need to compute the PHV --- it's what we
did before inventing nullingrel sets, after all.  It's a bit
unsatisfying; but with beta2 hard upon us, there's not time to
look for an aesthetically cleaner fix.

Richard Guo and Tom Lane

Discussion: https://postgr.es/m/CAMbWs48Jcw-NvnxT23WiHP324wG44DvzcH1j4hc0Zn+3sR9cfg@mail.gmail.com
2023-06-20 10:29:57 -04:00
Tom Lane
991a3df227 Fix filtering of "cloned" outer-join quals some more.
We've had multiple issues with the clause_is_computable_at logic that
I introduced in 2489d76c4: it's been known to accept more than one
clone of the same qual at the same plan node, and also to accept no
clones at all.  It's looking impractical to get it 100% right on the
basis of the currently-stored information, so fix it by introducing a
new RestrictInfo field "incompatible_relids" that explicitly shows
which outer joins a given clone mustn't be pushed above.

In principle we could populate this field in every RestrictInfo, but
that would cost space and there doesn't presently seem to be a need
for it in general.  Also, while deconstruct_distribute_oj_quals can
easily fill the field with the remaining members of the commutative
join set that it's considering, computing it in the general case
seems again pretty complicated.  So for now, just fill it for
clone quals.

Along the way, fix a bug that may or may not be only latent:
equivclass.c was generating replacement clauses with is_pushed_down
and has_clone/is_clone markings that didn't match their
required_relids.  This led me to conclude that leaving the clone flags
out of make_restrictinfo's purview wasn't such a great idea after all,
so add them.

Per report from Richard Guo.

Discussion: https://postgr.es/m/CAMbWs48EYi_9-pSd0ORes1kTmTeAjT4Q3gu49hJtYCbSn2JyeA@mail.gmail.com
2023-05-25 10:28:33 -04:00
Tom Lane
9df8f903eb Fix some issues with improper placement of outer join clauses.
After applying outer-join identity 3 in the forward direction,
it was possible for the planner to mistakenly apply a qual clause
from above the two outer joins at the now-lower join level.
This can give the wrong answer, since a value that would get nulled
by the now-upper join might not yet be null.
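
For reference, a concrete instance of identity 3 (tables and join
clauses are illustrative; the identity requires the b/c join clause to
be strict for b):

    -- (a leftjoin b) leftjoin c ...
    SELECT * FROM a LEFT JOIN b ON a.x = b.x LEFT JOIN c ON b.y = c.y;
    -- ... is equivalent, per identity 3, to a leftjoin (b leftjoin c):
    SELECT * FROM a LEFT JOIN (b LEFT JOIN c ON b.y = c.y) ON a.x = b.x;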

To fix, when we perform such a transformation, consider that the
now-lower join hasn't really completed the outer join it's nominally
responsible for and thus its relid set should not include that OJ's
relid (nor should its output Vars have that nullingrel bit set).
Instead we add those bits when the now-upper join is performed.
The existing rules for qual placement then suffice to prevent
higher qual clauses from dropping below the now-upper join.
There are a few complications from needing to consider transitive
closures in case multiple pushdowns have happened, but all in all
it's not a very complex patch.

This is all new logic (from 2489d76c4) so no need to back-patch.
The added test cases all have the same results as in v15.

Tom Lane and Richard Guo

Discussion: https://postgr.es/m/0b819232-4b50-f245-1c7d-c8c61bf41827@postgrespro.ru
2023-05-17 11:14:04 -04:00
David Rowley
3f58a4e296 Fix various typos and incorrect/outdated name references
Author: Alexander Lakhin
Discussion: https://postgr.es/m/699beab4-a6ca-92c9-f152-f559caf6dc25@gmail.com
2023-04-19 13:50:33 +12:00
Tom Lane
6b661b01f4 Remove local optimizations of empty Bitmapsets into null pointers.
These are all dead code now that it's done centrally.

Patch by me; thanks to Nathan Bossart and Richard Guo for review.

Discussion: https://postgr.es/m/1159933.1677621588@sss.pgh.pa.us
2023-03-02 12:01:47 -05:00
Tom Lane
a75ff55c83 Fix some issues with wrong placement of pseudo-constant quals.
initsplan.c figured that it could push Var-free qual clauses to
the top of the current JoinDomain, which is okay in the abstract.
But if the current domain is inside some outer join, and we later
commute an inside-the-domain outer join with one outside it,
we end up placing the pushed-up qual clause incorrectly.
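
For context, a Var-free qual is one like the WHERE clause below (example
made up); such a clause is normally evaluated just once by a gating
Result node rather than once per joined row:

    SELECT * FROM a JOIN b ON a.x = b.y WHERE now() > '2000-01-01';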

In distribute_qual_to_rels, avoid this by using the syntactic scope
of the qual clause; with the exception that if we're in the top-level
join domain we can still use the full query relid set, ensuring the
resulting gating Result node goes to the top of the plan.  (This is
approximately as smart as the pre-v16 code was.  Perhaps we can do
better later, but it's not clear that such cases are worth a lot of
sweat.)

In process_implied_equality, we don't have a clear notion of syntactic
scope, but we do have the results of SpecialJoinInfo construction.
Thumb through those and remove any lower outer joins that might get
commuted to above the join domain.  Again, we can make an exception
for the top-level join domain.  It'd be possible to work harder here
(for example, by keeping outer joins that aren't shown as potentially
commutable), but I'm going to stop here for the moment.  This issue
has convinced me that the current representation of join domains
probably needs further refinement, so I'm disinclined to write
inessential dependent logic just yet.

In passing, tighten the qualscope passed to process_implied_equality
by generate_base_implied_equalities_no_const; there's no need for
it to be larger than the rel we are currently considering.

Tom Lane and Richard Guo, per report from Tender Wang.

Discussion: https://postgr.es/m/CAHewXNk9eJ35ru5xATWioTV4+xZPHptjy9etdcNPjUfY9RQ+uQ@mail.gmail.com
2023-02-22 12:39:11 -05:00
Tom Lane
e9a20e451f When removing a relation from the query, drop its RelOptInfo.
In commit b78f6264e I opined that it was "too risky" to delete a
relation's RelOptInfo from the planner's data structures when we have
realized that we don't need to join to it; so instead we just marked
it as a dead relation.  In hindsight that judgment seems flawed: any
subsequent access to such a dead relation is arguably a bug in
itself, so leaving the RelOptInfo present just helps to mask bugs.
Let's delete it instead, allowing removal of the whole notion of a
"dead relation".  So far as the regression tests can find, this
requires no other code changes, except for one Assert in equivclass.c
that was very dubiously not complaining about access to a dead rel.

Discussion: https://postgr.es/m/229905.1676062220@sss.pgh.pa.us
2023-02-13 13:35:38 -05:00
Tom Lane
acc5821e4d Further fixes in qual nullingrel adjustment for outer join commutation.
One of the add_nulling_relids calls in deconstruct_distribute_oj_quals
added an OJ relid to too few Vars, while the other added it to too
many.  We should consider the syntactic structure not
min_left/righthand while deciding which Vars to decorate, and when
considering pushing up a lower outer join pursuant to transforming the
second form of OJ identity 3 to the first form, we only want to
decorate Vars coming from its LHS.

In a related bug, I realized that make_outerjoininfo was failing to
check a very basic property that's needed to apply OJ identity 3:
the syntactically-upper outer join clause can't refer to the lower
join's LHS.  This didn't break the join order restriction logic,
but it led to setting bogus commute_xxx bits, possibly resulting
in bogus nullingrel markings in modified quals.

Richard Guo and Tom Lane

Discussion: https://postgr.es/m/CAMbWs497CmBruMx1SOjepWEz+T5NWa4scqbdE9v7ZzSXqH_gQw@mail.gmail.com
Discussion: https://postgr.es/m/CAEP4nAx9C5gXNBfEA0JBfz7B+5f1Bawt-RWQWyhev-wdps8BZA@mail.gmail.com
2023-02-10 13:31:00 -05:00
Tom Lane
2cbbffff05 Remove leftover code in deconstruct_distribute_oj_quals().
The initial "put back OJ relids" adjustment of ojscope was
incorrect and unnecessary; it seems to be a leftover from
when I (tgl) was trying to get this function to work at all.

Richard Guo

Discussion: https://postgr.es/m/CAMbWs4-L2C47ZGZPabBAi5oDZsKmsbvhYcGCy5o=gCjsaG_ZQA@mail.gmail.com
2023-02-07 11:56:43 -05:00
Tom Lane
b2d0e13a0a Fix over-optimistic updating of info about commutable outer joins.
make_outerjoininfo was set up to update SpecialJoinInfo's
commute_below, commute_above_l, commute_above_r fields as soon as
it found a pair of outer joins that look like they can commute.
However, this decision could be negated later in the same loop due
to finding an intermediate outer join that prevents commutation.
That left us with commute_xxx fields that were contradictory to the
join order restrictions expressed in min_lefthand/min_righthand.
The latter fields would keep us from actually choosing a bad join
order; but the inconsistent commute_xxx fields could bollix details
such as the varnullingrels values created for intermediate join
relation targetlists, ending in an assertion failure in setrefs.c.

To fix, wait till the end of make_outerjoininfo where we have
accurate values for min_lefthand/min_righthand, and then insert
only relids not present in those sets into the commute_xxx fields.

Per SQLSmith testing by Robins Tharakan.  Note that while Robins
bisected the failure to commit b448f1c8d, it's really the fault of
2489d76c4.  The outerjoin_delayed logic removed in the later commit
was keeping us from deciding that troublesome join pairs commute,
at least in the specific example seen here.

Discussion: https://postgr.es/m/CAEP4nAyAORgE8K_RHSmvWbE9UaChhjbEL1RrDU3neePwwRUB=A@mail.gmail.com
2023-02-05 14:25:10 -05:00
Tom Lane
9f452feeeb Fix thinko in qual distribution.
deconstruct_distribute tweaks the outer join scope (ojscope)
it passes to distribute_qual_to_rels when considering an outer
join qual that's above potentially-commutable outer joins.
However, if the current join is *not* potentially commutable,
we shouldn't do that.  The argument that distribute_qual_to_rels
will not do something wrong with the bogus ojscope falls flat
if we don't pass it a non-null postponed_oj_qual_list.  Moreover,
there's no need to play games in this case since we aren't going
to commute anything.

Per SQLSmith testing by Robins Tharakan.

Discussion: https://postgr.es/m/CAEP4nAw74k4b-=93gmfCNX3MOY3y4uPxqbk_MnCVEpdsqHJVsg@mail.gmail.com
2023-02-04 17:40:35 -05:00
Tom Lane
8538519db1 Fix thinko in outer-join removal.
If we have a RestrictInfo that mentions both the removal-candidate
relation and the outer join's relid, then that is a pushed-down
condition not a join condition, so it should be grounds for deciding
that we can't remove the outer join.  In commit 2489d76c4, I'd blindly
included the OJ's relid into "joinrelids" as per the new standard
convention, but the checks of attr_needed and ph_needed should only
allow the join's input rels to be mentioned.
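
For context, the shape of query that join removal targets looks roughly
like this (schema illustrative; removal also requires that b.y be
provably unique, e.g. via a unique index, and that nothing above the
join needs any column of b):

    CREATE TABLE a (x int);
    CREATE TABLE b (y int PRIMARY KEY);

    -- b contributes nothing to the result, so the left join can be
    -- removed entirely:
    SELECT a.* FROM a LEFT JOIN b ON a.x = b.y;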

Having done that, the check for references in pushed-down quals
a few lines further down should be redundant.  I left it in place
as an Assert, though.

While researching this I happened across a couple of comments that
worried about the effects of update_placeholder_eval_levels.
That's gone as of b448f1c8d, so we can remove some worry.

Per bug #17769 from Robins Tharakan.  The submitted test case
triggers this more or less accidentally because we flatten out
a LATERAL sub-select after we've done join strength reduction;
if we did that in the other order, this problem would be masked
because the outer join would get simplified to an inner join.
To ensure that the committed test case will continue to test
what it's meant to even if we make that happen someday, use a
test clause involving COALESCE(), which will prevent us from
using it to do join strength reduction.

Patch by me, but thanks to Richard Guo for initial investigation.

Discussion: https://postgr.es/m/17769-e4f7a5c9d84a80a7@postgresql.org
2023-02-04 15:19:54 -05:00
Tom Lane
5840c20272 Rethink treatment of "postponed" quals in deconstruct_jointree().
After pulling up LATERAL subqueries, we may have qual clauses that
refer to relations outside their syntactic scope.  Before doing any
such pullup, prepjointree.c checks to make sure that it wouldn't
create a semantically-invalid situation; but we leave it to
deconstruct_jointree() to actually move these quals up the join
tree to a place where they can be evaluated.  In commit 2489d76c4,
I (tgl) refactored deconstruct_jointree() in a way that caused
assertion failures while moving such quals, because the new logic
failed to distinguish "this jointree node is a parent of the source
one" from "this jointree node is processed after the source
one in depth-first order".

Fix this, and at the same time reduce the overhead a bit, by
getting rid of the common PostponedQual list and instead making each
JoinTreeItem contain a list of quals that needed to be postponed to
its level.  We can help distribute_qual_to_rels find the appropriate
JoinTreeItem efficiently by adding parent-item links to the
JoinTreeItem data structure.  This ends up being the same number
of relid subset checks as the original (pre-bug) logic, but less
list manipulation is required during multi-level postponements.

Richard Guo and Tom Lane, per bug #17768 from Robins Tharakan.

Discussion: https://postgr.es/m/17768-5ac8730ece54478f@postgresql.org
2023-02-04 12:45:53 -05:00
Tom Lane
3bef56e116 Invent "join domains" to replace the below_outer_join hack.
EquivalenceClasses are now understood as applying within a "join
domain", which is a set of inner-joined relations (possibly underneath
an outer join).  We no longer need to treat an EC from below an outer
join as a second-class citizen.

I have hopes of eventually being able to treat outer-join clauses via
EquivalenceClasses, by means of only applying deductions within the
EC's join domain.  There are still problems in the way of that, though,
so for now the reconsider_outer_join_clause logic is still here.

I haven't been able to get rid of RestrictInfo.is_pushed_down either,
but I wonder if that could be recast using JoinDomains.

I had to hack one test case in postgres_fdw.sql to make it still test
what it was meant to, because postgres_fdw is inconsistent about
how it deals with quals containing non-shippable expressions; see
https://postgr.es/m/1691374.1671659838@sss.pgh.pa.us.  That should
be improved, but I don't think it's within the scope of this patch
series.

Patch by me; thanks to Richard Guo for review.

Discussion: https://postgr.es/m/830269.1656693747@sss.pgh.pa.us
2023-01-30 13:50:25 -05:00
Tom Lane
b448f1c8d8 Do assorted mop-up in the planner.
Remove RestrictInfo.nullable_relids, along with a good deal of
infrastructure that calculated it.  One use-case for it was in
join_clause_is_movable_to, but we can now replace that usage with
a check to see if the clause's relids include any outer join
that can null the target relation.  The other use-case was in
join_clause_is_movable_into, but that test can just be dropped
entirely now that the clause's relids include outer joins.
Furthermore, join_clause_is_movable_into should now be
accurate enough that it will accept anything returned by
generate_join_implied_equalities, so we can restore the Assert
that was diked out in commit 95f4e59c3.

Remove the outerjoin_delayed mechanism.  We needed this before to
prevent quals from getting evaluated below outer joins that should
null some of their vars.  Now that we consider varnullingrels while
placing quals, that's taken care of automatically, so throw the
whole thing away.

Teach remove_useless_result_rtes to also remove useless FromExprs.
Having done that, the delay_upper_joins flag serves no purpose any
more and we can remove it, largely reverting 11086f2f2.

Use constant TRUE for "dummy" clauses when throwing back outer joins.
This improves on a hack I introduced in commit 6a6522529.  If we
have a left-join clause l.x = r.y, and a WHERE clause l.x = constant,
we generate r.y = constant and then don't really have a need for the
join clause.  But we must throw the join clause back anyway after
marking it redundant, so that the join search heuristics won't think
this is a clauseless join and avoid it.  That was a kluge introduced
under time pressure, and after looking at it I thought of a better
way: let's just introduce constant-TRUE "join clauses" instead,
and get rid of them at the end.  This improves the generated plans for
such cases by not having to test a redundant join clause.  We can also
get rid of the ugly hack used to mark such clauses as redundant for
selectivity estimation.
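
The scenario described above, in query form (names illustrative):

    -- From the join clause l.x = r.y plus the WHERE clause l.x = 1, the
    -- planner derives r.y = 1, enforceable at the scan of r; the join
    -- clause itself is then redundant, but a constant-TRUE stand-in
    -- keeps the join from looking clauseless:
    SELECT * FROM l LEFT JOIN r ON l.x = r.y WHERE l.x = 1;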

Patch by me; thanks to Richard Guo for review.

Discussion: https://postgr.es/m/830269.1656693747@sss.pgh.pa.us
2023-01-30 13:44:36 -05:00
Tom Lane
2489d76c49 Make Vars be outer-join-aware.
Traditionally we used the same Var struct to represent the value
of a table column everywhere in parse and plan trees.  This choice
predates our support for SQL outer joins, and it's really a pretty
bad idea with outer joins, because the Var's value can depend on
where it is in the tree: it might go to NULL above an outer join.
So expression nodes that are equal() per equalfuncs.c might not
represent the same value, which is a huge correctness hazard for
the planner.
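
A small example of the hazard (tables illustrative):

    -- Both references to b.y below are equal() as expression nodes, yet
    -- they do not denote the same value: in the ON clause b.y comes
    -- straight from b, while in the SELECT list it may have been nulled
    -- by the left join:
    SELECT b.y FROM a LEFT JOIN b ON b.y = a.x;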

To improve this, decorate Var nodes with a bitmapset showing
which outer joins (identified by RTE indexes) may have nulled
them at the point in the parse tree where the Var appears.
This allows us to trust that equal() Vars represent the same value.
A certain amount of klugery is still needed to cope with cases
where we re-order two outer joins, but it's possible to make it
work without sacrificing that core principle.  PlaceHolderVars
receive similar decoration for the same reason.

In the planner, we include these outer join bitmapsets into the relids
that an expression is considered to depend on, and in consequence also
add outer-join relids to the relids of join RelOptInfos.  This allows
us to correctly perceive whether an expression can be calculated above
or below a particular outer join.

This change affects FDWs that want to plan foreign joins.  They *must*
follow suit when labeling foreign joins in order to match with the
core planner, but for many purposes (if postgres_fdw is any guide)
they'd prefer to consider only base relations within the join.
To support both requirements, redefine ForeignScan.fs_relids as
base+OJ relids, and add a new field fs_base_relids that's set up by
the core planner.

Large though it is, this commit just does the minimum necessary to
install the new mechanisms and get check-world passing again.
Follow-up patches will perform some cleanup.  (The README additions
and comments mention some stuff that will appear in the follow-up.)

Patch by me; thanks to Richard Guo for review.

Discussion: https://postgr.es/m/830269.1656693747@sss.pgh.pa.us
2023-01-30 13:16:20 -05:00
Bruce Momjian
c8e1ba736b Update copyright for 2023
Backpatch-through: 11
2023-01-02 15:00:37 -05:00
Tom Lane
b3ff6c742f Use an explicit state flag to control PlaceHolderInfo creation.
Up to now, callers of find_placeholder_info() were required to pass
a flag indicating if it's OK to make a new PlaceHolderInfo.  That'd
be fine if the callers had free choice, but they do not.  Once we
begin deconstruct_jointree() it's no longer OK to make more PHIs;
while callers before that always want to create a PHI if it's not
there already.  So there's no freedom of action, only the opportunity
to cause bugs by creating PHIs too late.  Let's get rid of that in
favor of adding a state flag PlannerInfo.placeholdersFrozen, which
we can set at the point where it's no longer OK to make more PHIs.

This patch also simplifies a couple of call sites that were using
complicated logic to avoid calling find_placeholder_info() as much
as possible.  Now that that lookup is O(1) thanks to the previous
commit, the extra bitmap manipulations are probably a net negative.

Discussion: https://postgr.es/m/1405792.1660677844@sss.pgh.pa.us
2022-08-17 15:52:53 -04:00
Bruce Momjian
27b77ecf9f Update copyright for 2022
Backpatch-through: 10
2022-01-07 19:04:57 -05:00
David Rowley
39a3105678 Fix incorrect hash equality operator bug in Memoize
In v14, because we don't have a field in RestrictInfo to cache both the
left and right type's hash equality operator, we just restrict the scope
of Memoize to only when the left and right types of a RestrictInfo are the
same.

In master we add another field to RestrictInfo and cache both hash
equality operators.

Reported-by: Jaime Casanova
Author: David Rowley
Discussion: https://postgr.es/m/20210929185544.GB24346%40ahch-to
Backpatch-through: 14
2021-11-08 14:40:33 +13:00
David Rowley
83f4fcc655 Change the name of the Result Cache node to Memoize
"Result Cache" was never a great name for this node, but nobody managed
to come up with another name that anyone liked enough.  That was until
David Johnston mentioned "Node Memoization", which Tom Lane revised to
just "Memoize".  People seem to like "Memoize", so let's do the rename.

Reviewed-by: Justin Pryzby
Discussion: https://postgr.es/m/20210708165145.GG1176@momjian.us
Backpatch-through: 14, where Result Cache was introduced
2021-07-14 12:43:58 +12:00
David Rowley
9eacee2e62 Add Result Cache executor node (take 2)
Here we add a new executor node type named "Result Cache".  The planner
can include this node type in the plan to have the executor cache the
results from the inner side of parameterized nested loop joins.  This
allows caching of tuples for sets of parameters so that in the event that
the node sees the same parameter values again, it can just return the
cached tuples instead of rescanning the inner side of the join all over
again.  Internally, result cache uses a hash table in order to quickly
find tuples that have been previously cached.

For certain data sets, this can significantly improve the performance of
joins.  The best cases for using this new node type are for join problems
where a large portion of the tuples from the inner side of the join have
no join partner on the outer side of the join.  In such cases, hash join
would have to hash values that are never looked up, thus bloating the hash
table and possibly causing it to multi-batch.  Merge joins would have to
skip over all of the unmatched rows.  If we use a nested loop join with a
result cache, then we only cache tuples that have at least one join
partner on the outer side of the join.  The benefits of using a
parameterized nested loop with a result cache increase when there are
fewer distinct values being looked up and the number of lookups of each
value is large.  Also, hash probes to look up the cache can be much faster
than the hash probe in a hash join as it's common that the result cache's
hash table is much smaller than the hash join's due to result cache only
caching useful tuples rather than all tuples from the inner side of the
join.  This variation in hash probe performance is more significant when
the hash join's hash table no longer fits into the CPU's L3 cache, but the
result cache's hash table does.  The apparent "random" access of hash
buckets with each hash probe can cause a poor L3 cache hit ratio for large
hash tables.  Smaller hash tables generally perform better.

The hash table used for the cache limits itself to not exceeding work_mem
* hash_mem_multiplier in size.  We maintain a dlist of keys for this cache
and when we're adding new tuples and realize we've exceeded the memory
budget, we evict cache entries starting with the least recently used ones
until we have enough memory to add the new tuples to the cache.

For parameterized nested loop joins, we now consider using one of these
result cache nodes in between the nested loop node and its inner node.  We
determine when this might be useful based on cost, which is primarily
driven off of what the expected cache hit ratio will be.  Estimating the
cache hit ratio relies on having good distinct estimates on the nested
loop's parameters.
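
A sketch of the intended plan shape (table names, statistics, and
whether the planner actually chooses this plan are all illustrative):

    -- With many duplicate o.customer_id values and an index on c.id, a
    -- Result Cache node may be placed between the Nested Loop and the
    -- inner index scan, so repeated customer_id lookups hit the cache:
    EXPLAIN (COSTS OFF)
    SELECT o.*, c.name
    FROM orders o JOIN customers c ON c.id = o.customer_id;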

For now, the planner will only consider using a result cache for
parameterized nested loop joins.  This works for both normal joins and
also for LATERAL type joins to subqueries.  It is possible to use this new
node for other uses in the future.  For example, to cache results from
correlated subqueries.  However, that's not done here due to some
difficulties obtaining a distinct estimation on the outer plan to
calculate the estimated cache hit ratio.  Currently we plan the inner plan
before planning the outer plan so there is no good way to know if a result
cache would be useful or not since we can't estimate the number of times
the subplan will be called until the outer plan is generated.

The functionality being added here is newly introducing a dependency on
the return value of estimate_num_groups() during the join search.
Previously, during the join search, we only ever needed to perform
selectivity estimations.  With this commit, we need to use
estimate_num_groups() in order to estimate what the hit ratio on the
result cache will be.   In simple terms, if we expect 10 distinct values
and we expect 1000 outer rows, then we'll estimate the hit ratio to be
99%.  Since cache hits are very cheap compared to scanning the underlying
nodes on the inner side of the nested loop join, then this will
significantly reduce the planner's cost for the join.   However, it's
fairly easy to see here that things will go bad when estimate_num_groups()
incorrectly returns a value that's significantly lower than the actual
number of distinct values.  If this happens then that may cause us to make
use of a nested loop join with a result cache instead of some other join
type, such as a merge or hash join.  Our distinct estimations have been
known to be a source of trouble in the past, so the extra reliance on them
here could cause the planner to choose slower plans than it did prior
to having this feature.  Distinct estimations are also fairly hard to
make accurately when several tables have been joined already or when a
WHERE clause filters out a set of values that are correlated to the
expressions we're estimating the number of distinct values for.

For now, the costing we perform during query planning for result caches
does put quite a bit of faith in the distinct estimations being accurate.
When these are accurate then we should generally see faster execution
times for plans containing a result cache.  However, in the real world, we
may find that we need to either change the costings to put less trust in
the distinct estimations being accurate or perhaps even disable this
feature by default.  There's always an element of risk when we teach the
query planner to do new tricks that it decides to use that new trick at
the wrong time and causes a regression.  Users may opt to get the old
behavior by turning the feature off using the enable_resultcache GUC.
Currently, this is enabled by default.  It remains to be seen if we'll
maintain that setting for the release.

Additionally, the name "Result Cache" is the best name I could think of
for this new node at the time I started writing the patch.  Nobody seems
to strongly dislike the name. A few people did suggest other names but no
other name seemed to dominate in the brief discussion that there was about
names. Let's allow the beta period to see if the current name pleases
enough people.  If there's some consensus on a better name, then we can
change it before the release.  Please see the 2nd discussion link below
for the discussion on the "Result Cache" name.

Author: David Rowley
Reviewed-by: Andy Fan, Justin Pryzby, Zhihong Yu, Hou Zhijie
Tested-By: Konstantin Knizhnik
Discussion: https://postgr.es/m/CAApHDvrPcQyQdWERGYWx8J%2B2DLUNgXu%2BfOSbQ1UscxrunyXyrQ%40mail.gmail.com
Discussion: https://postgr.es/m/CAApHDvq=yQXr5kqhRviT2RhNKwToaWr9JAN5t+5_PzhuRJ3wvg@mail.gmail.com
2021-04-02 14:10:56 +13:00
David Rowley
28b3e3905c Revert b6002a796
This removes "Add Result Cache executor node".  It seems that something
weird is going on with the tracking of cache hits and misses as
highlighted by many buildfarm animals.  It's not yet clear what the
problem is as other parts of the plan indicate that the cache did work
correctly, it's just the hits and misses that were being reported as 0.

This is especially a bad time to have the buildfarm so broken, so
reverting before too many more animals go red.

Discussion: https://postgr.es/m/CAApHDvq_hydhfovm4=izgWs+C5HqEeRScjMbOgbpC-jRAeK3Yw@mail.gmail.com
2021-04-01 13:33:23 +13:00
David Rowley
b6002a796d Add Result Cache executor node
Here we add a new executor node type named "Result Cache".  The planner
can include this node type in the plan to have the executor cache the
results from the inner side of parameterized nested loop joins.  This
allows caching of tuples for sets of parameters so that in the event that
the node sees the same parameter values again, it can just return the
cached tuples instead of rescanning the inner side of the join all over
again.  Internally, result cache uses a hash table in order to quickly
find tuples that have been previously cached.

For certain data sets, this can significantly improve the performance of
joins.  The best cases for using this new node type are for join problems
where a large portion of the tuples from the inner side of the join have
no join partner on the outer side of the join.  In such cases, hash join
would have to hash values that are never looked up, thus bloating the hash
table and possibly causing it to multi-batch.  Merge joins would have to
skip over all of the unmatched rows.  If we use a nested loop join with a
result cache, then we only cache tuples that have at least one join
partner on the outer side of the join.  The benefits of using a
parameterized nested loop with a result cache increase when there are
fewer distinct values being looked up and the number of lookups of each
value is large.  Also, hash probes to look up the cache can be much faster
than the hash probe in a hash join as it's common that the result cache's
hash table is much smaller than the hash join's due to result cache only
caching useful tuples rather than all tuples from the inner side of the
join.  This variation in hash probe performance is more significant when
the hash join's hash table no longer fits into the CPU's L3 cache, but the
result cache's hash table does.  The apparent "random" access of hash
buckets with each hash probe can cause a poor L3 cache hit ratio for large
hash tables.  Smaller hash tables generally perform better.

The hash table used for the cache limits itself to not exceeding work_mem
* hash_mem_multiplier in size.  We maintain a dlist of keys for this cache
and when we're adding new tuples and realize we've exceeded the memory
budget, we evict cache entries starting with the least recently used ones
until we have enough memory to add the new tuples to the cache.

For parameterized nested loop joins, we now consider using one of these
result cache nodes in between the nested loop node and its inner node.  We
determine when this might be useful based on cost, which is primarily
driven off of what the expected cache hit ratio will be.  Estimating the
cache hit ratio relies on having good distinct estimates on the nested
loop's parameters.

For now, the planner will only consider using a result cache for
parameterized nested loop joins.  This works for both normal joins and
also for LATERAL type joins to subqueries.  It is possible to use this new
node for other uses in the future.  For example, to cache results from
correlated subqueries.  However, that's not done here due to some
difficulties obtaining a distinct estimation on the outer plan to
calculate the estimated cache hit ratio.  Currently we plan the inner plan
before planning the outer plan so there is no good way to know if a result
cache would be useful or not since we can't estimate the number of times
the subplan will be called until the outer plan is generated.

The functionality being added here is newly introducing a dependency on
the return value of estimate_num_groups() during the join search.
Previously, during the join search, we only ever needed to perform
selectivity estimations.  With this commit, we need to use
estimate_num_groups() in order to estimate what the hit ratio on the
result cache will be.   In simple terms, if we expect 10 distinct values
and we expect 1000 outer rows, then we'll estimate the hit ratio to be
99%.  Since cache hits are very cheap compared to scanning the underlying
nodes on the inner side of the nested loop join, then this will
significantly reduce the planner's cost for the join.   However, it's
fairly easy to see here that things will go bad when estimate_num_groups()
incorrectly returns a value that's significantly lower than the actual
number of distinct values.  If this happens then that may cause us to make
use of a nested loop join with a result cache instead of some other join
type, such as a merge or hash join.  Our distinct estimations have been
known to be a source of trouble in the past, so the extra reliance on them
here could cause the planner to choose slower plans than it did prior
to having this feature.  Distinct estimations are also fairly hard to
make accurately when several tables have been joined already or when a
WHERE clause filters out a set of values that are correlated to the
expressions we're estimating the number of distinct values for.

For now, the costing we perform during query planning for result caches
does put quite a bit of faith in the distinct estimations being accurate.
When these are accurate then we should generally see faster execution
times for plans containing a result cache.  However, in the real world, we
may find that we need to either change the costings to put less trust in
the distinct estimations being accurate or perhaps even disable this
feature by default.  There's always an element of risk when we teach the
query planner to do new tricks that it decides to use that new trick at
the wrong time and causes a regression.  Users may opt to get the old
behavior by turning the feature off using the enable_resultcache GUC.
Currently, this is enabled by default.  It remains to be seen if we'll
maintain that setting for the release.

Additionally, the name "Result Cache" is the best name I could think of
for this new node at the time I started writing the patch.  Nobody seems
to strongly dislike the name. A few people did suggest other names but no
other name seemed to dominate in the brief discussion that there was about
names. Let's allow the beta period to see if the current name pleases
enough people.  If there's some consensus on a better name, then we can
change it before the release.  Please see the 2nd discussion link below
for the discussion on the "Result Cache" name.

Author: David Rowley
Reviewed-by: Andy Fan, Justin Pryzby, Zhihong Yu
Tested-By: Konstantin Knizhnik
Discussion: https://postgr.es/m/CAApHDvrPcQyQdWERGYWx8J%2B2DLUNgXu%2BfOSbQ1UscxrunyXyrQ%40mail.gmail.com
Discussion: https://postgr.es/m/CAApHDvq=yQXr5kqhRviT2RhNKwToaWr9JAN5t+5_PzhuRJ3wvg@mail.gmail.com
2021-04-01 12:32:22 +13:00
David Rowley
f58b230ed0 Cache if PathTarget and RestrictInfos contain volatile functions
Here we aim to reduce duplicate work done by contain_volatile_functions()
by caching whether PathTargets and RestrictInfos contain any volatile
functions the first time contain_volatile_functions() is called for them.
Any future calls for these nodes just use the cached value rather than
going to the trouble of recursively checking the sub-node all over again.
Thanks to Tom Lane for the idea.

Any locations in the code which make changes to a PathTarget or
RestrictInfo which could change the outcome of the volatility check must
change the cached value back to VOLATILITY_UNKNOWN again.
contain_volatile_functions() is the only code in charge of setting the
cache value to either VOLATILITY_VOLATILE or VOLATILITY_NOVOLATILE.

Some existing code does benefit from this additional caching, however,
this change is mainly aimed at an upcoming patch that must check for
volatility during the join search.  Repeated volatility checks in that
case can become very expensive when the join search contains more than a
few relations.

Author: David Rowley
Discussion: https://postgr.es/m/3795226.1614059027@sss.pgh.pa.us
2021-03-29 14:55:26 +13:00
Tom Lane
55dc86eca7 Fix pull_varnos' miscomputation of relids set for a PlaceHolderVar.
Previously, pull_varnos() took the relids of a PlaceHolderVar as being
equal to the relids in its contents, but that fails to account for the
possibility that we have to postpone evaluation of the PHV due to outer
joins.  This could result in a malformed plan.  The known cases end up
triggering the "failed to assign all NestLoopParams to plan nodes"
sanity check in createplan.c, but other symptoms may be possible.

The right value to use is the join level we actually intend to evaluate
the PHV at.  We can get that from the ph_eval_at field of the associated
PlaceHolderInfo.  However, there are some places that call pull_varnos()
before the PlaceHolderInfos have been created; in that case, fall back
to the conservative assumption that the PHV will be evaluated at its
syntactic level.  (In principle this might result in missing some legal
optimization, but I'm not aware of any cases where it's an issue in
practice.)  Things are also a bit ticklish for calls occurring during
deconstruct_jointree(), but AFAICS the ph_eval_at fields should have
reached their final values by the time we need them.

The main problem in making this work is that pull_varnos() has no
way to get at the PlaceHolderInfos.  We can fix that easily, if a
bit tediously, in HEAD by passing it the planner "root" pointer.
In the back branches that'd cause an unacceptable API/ABI break for
extensions, so leave the existing entry points alone and add new ones
with the additional parameter.  (If an old entry point is called and
encounters a PHV, it'll fall back to using the syntactic level,
again possibly missing some valid optimization.)

Back-patch to v12.  The computation is surely also wrong before that,
but it appears that we cannot reach a bad plan thanks to join order
restrictions imposed on the subquery that the PlaceHolderVar came from.
The error only became reachable when commit 4be058fe9 allowed trivial
subqueries to be collapsed out completely, eliminating their join order
restrictions.

Per report from Stephan Springl.

Discussion: https://postgr.es/m/171041.1610849523@sss.pgh.pa.us
2021-01-21 15:37:23 -05:00
Bruce Momjian
ca3b37487b Update copyright for 2021
Backpatch-through: 9.5
2021-01-02 13:06:25 -05:00
Tom Lane
ad1c36b070 Fix foreign-key selectivity estimation in the presence of constants.
get_foreign_key_join_selectivity() looks for join clauses that equate
the two sides of the FK constraint.  However, if we have a query like
"WHERE fktab.a = pktab.a and fktab.a = 1", it won't find any such join
clause, because equivclass.c replaces the given clauses with "fktab.a
= 1 and pktab.a = 1", which can be enforced at the scan level, leaving
nothing to be done for column "a" at the join level.
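
The problem case, spelled out (using the column and table names from
above):

    -- equivclass.c replaces the join clause fktab.a = pktab.a with
    -- "fktab.a = 1 AND pktab.a = 1", both enforceable at scan level, so
    -- get_foreign_key_join_selectivity() finds no join clause matching
    -- the FK:
    SELECT * FROM fktab JOIN pktab ON fktab.a = pktab.a WHERE fktab.a = 1;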

We can fix that expectation without much trouble, but then a new problem
arises: applying the foreign-key-based selectivity rule produces a
rowcount underestimate, because we're effectively double-counting the
selectivity of the "fktab.a = 1" clause.  So we have to cancel that
selectivity out of the estimate.

To fix, refactor process_implied_equality() so that it can pass back the
new RestrictInfo to its callers in equivclass.c, allowing the generated
"fktab.a = 1" clause to be saved in the EquivalenceClass's ec_derives
list.  Then it's not much trouble to dig out the relevant RestrictInfo
when we need to adjust an FK selectivity estimate.  (While at it, we
can also remove the expensive use of initialize_mergeclause_eclasses()
to set up the new RestrictInfo's left_ec and right_ec pointers.
The equivclass.c code can set those basically for free.)

This seems like clearly a bug fix, but I'm hesitant to back-patch it,
first because there's some API/ABI risk for extensions and second because
we're usually loath to destabilize plan choices in stable branches.

Per report from Sigrid Ehrenreich.

Discussion: https://postgr.es/m/1019549.1603770457@sss.pgh.pa.us
Discussion: https://postgr.es/m/AM6PR02MB5287A0ADD936C1FA80973E72AB190@AM6PR02MB5287.eurprd02.prod.outlook.com
2020-10-28 11:15:47 -04:00
Bruce Momjian
7559d8ebfa Update copyrights for 2020
Backpatch-through: update all files in master, backpatch legal files through 9.4
2020-01-01 12:21:45 -05:00
Amit Kapila
14aec03502 Make the order of the header file includes consistent in backend modules.
Similar to commits 7e735035f2 and dddf4cdc33, this commit makes the order
of header file inclusion consistent for backend modules.

In passing, removed a couple of duplicate inclusions.

Author: Vignesh C
Reviewed-by: Kuntal Ghosh and Amit Kapila
Discussion: https://postgr.es/m/CALDaNm2Sznv8RR6Ex-iJO6xAdsxgWhCoETkaYX=+9DW3q0QCfA@mail.gmail.com
2019-11-12 08:30:16 +05:30
Tom Lane
5ee190f8ec Rationalize use of list_concat + list_copy combinations.
In the wake of commit 1cff1b95a, the result of list_concat no longer
shares the ListCells of the second input.  Therefore, we can replace
"list_concat(x, list_copy(y))" with just "list_concat(x, y)".

To improve call sites that were list_copy'ing the first argument,
or both arguments, invent "list_concat_copy()" which produces a new
list sharing no ListCells with either input.  (This is a bit faster
than "list_concat(list_copy(x), y)" because it makes the result list
the right size to start with.)

In call sites that were not list_copy'ing the second argument, the new
semantics mean that we are usually leaking the second List's storage,
since typically there is no remaining pointer to it.  We considered
inventing another list_copy variant that would list_free the second
input, but concluded that for most call sites it isn't worth worrying
about, given the relative compactness of the new List representation.
(Note that in cases where such leakage would happen, the old code
already leaked the second List's header; so we're only discussing
the size of the leak, not whether there is one.  I did adjust two or
three places that had been taking the trouble to free that header so
that they now manually free the whole second List.)

Patch by me; thanks to David Rowley for review.

Discussion: https://postgr.es/m/11587.1550975080@sss.pgh.pa.us
2019-08-12 11:20:18 -04:00
Tom Lane
8255c7a5ee Phase 2 pgindent run for v12.
Switch to 2.1 version of pg_bsd_indent.  This formats
multiline function declarations "correctly", that is with
additional lines of parameter declarations indented to match
where the first line's left parenthesis is.

Discussion: https://postgr.es/m/CAEepm=0P3FeTXRcU5B2W3jv3PgRVZ-kGUXLGfd42FFhUROO3ug@mail.gmail.com
2019-05-22 13:04:48 -04:00
Tom Lane
428b260f87 Speed up planning when partitions can be pruned at plan time.
Previously, the planner created RangeTblEntry and RelOptInfo structs
for every partition of a partitioned table, even though many of them
might later be deemed uninteresting thanks to partition pruning logic.
This incurred significant overhead when there are many partitions.
Arrange to postpone creation of these data structures until after
we've processed the query enough to identify restriction quals for
the partitioned table, and then apply partition pruning before not
after creation of each partition's data structures.  In this way
we need not open the partition relations at all for partitions that
the planner has no real interest in.

For queries that can be proven at plan time to access only a small
number of partitions, this patch improves the practical maximum
number of partitions from under 100 to perhaps a few thousand.
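
A sketch of the kind of query that benefits (schema illustrative):

    CREATE TABLE m (ts date, v int) PARTITION BY RANGE (ts);
    CREATE TABLE m_2019_03 PARTITION OF m
        FOR VALUES FROM ('2019-03-01') TO ('2019-04-01');
    -- ... plus many more monthly partitions ...

    -- The qual identifies a single partition at plan time, so only that
    -- partition's RangeTblEntry and RelOptInfo need to be built:
    EXPLAIN SELECT * FROM m WHERE ts = '2019-03-15';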

Amit Langote, reviewed at various times by Dilip Kumar, Jesper Pedersen,
Yoshikazu Imai, and David Rowley

Discussion: https://postgr.es/m/9d7c5112-cb99-6a47-d3be-cf1ee6862a1d@lab.ntt.co.jp
2019-03-30 18:58:55 -04:00
Tom Lane
53bcf5e3db Build "other rels" of appendrel baserels in a separate step.
Up to now, otherrel RelOptInfos were built at the same time as baserel
RelOptInfos, thanks to recursion in build_simple_rel().  However,
nothing in query_planner's preprocessing cares at all about otherrels,
only baserels, so we don't really need to build them until just before
we enter make_one_rel.  This has two benefits:

* create_lateral_join_info did a lot of extra work to propagate
lateral-reference information from parents to the correct children.
But if we delay creation of the children till after that, it's
trivial (and much harder to break, too).

* Since we have all the restriction quals correctly assigned to
parent appendrels by this point, it'll be possible to do plan-time
pruning and never make child RelOptInfos at all for partitions that
can be pruned away.  That's not done here, but will be later on.

Amit Langote, reviewed at various times by Dilip Kumar, Jesper Pedersen,
Yoshikazu Imai, and David Rowley

Discussion: https://postgr.es/m/9d7c5112-cb99-6a47-d3be-cf1ee6862a1d@lab.ntt.co.jp
2019-03-26 18:21:10 -04:00
Tom Lane
bdd9a99aac Propagate lateral-reference information to indirect descendant relations.
create_lateral_join_info() computes a bunch of information about lateral
references between base relations, and then attempts to propagate those
markings to appendrel children of the original base relations.  But the
original coding neglected the possibility of indirect descendants
(grandchildren etc).  During v11 development we noticed that this was
wrong for partitioned-table cases, but failed to realize that it was just
as wrong for any appendrel.  While the case can't arise for appendrels
derived from traditional table inheritance (because we make a flat
appendrel for that), nested appendrels can arise from nested UNION ALL
subqueries.  Failure to mark the lower-level relations as having lateral
references leads to confusion in add_paths_to_append_rel about whether
unparameterized paths can be built.  It's not very clear whether that
leads to any user-visible misbehavior; the lack of field reports suggests
that it may cause nothing worse than minor cost misestimation.  Still,
it's a bug, and it leads to failures of Asserts that I intend to add
later.

To fix, we need to propagate information from all appendrel parents,
not just those that are RELOPT_BASERELs.  We can still do it in one
pass, if we rely on the append_rel_list to be ordered with ancestor
relationships before descendant ones; add assertions checking that.
While fixing this, we can make a small performance improvement by
traversing the append_rel_list just once instead of separately for
each appendrel parent relation.

Noted while investigating bug #15613, though this patch does not fix
that (which is why I'm not committing the related Asserts yet).

Discussion: https://postgr.es/m/3951.1549403812@sss.pgh.pa.us
2019-02-06 12:45:21 -05:00
Tom Lane
f09346a9c6 Refactor planner's header files.
Create a new header optimizer/optimizer.h, which exposes just the
planner functions that can be used "at arm's length", without need
to access Paths or the other planner-internal data structures defined
in nodes/relation.h.  This is intended to provide the whole planner
API seen by most of the rest of the system; although FDWs still need
to use additional stuff, and more thought is also needed about just
what selfuncs.c should rely on.

The main point of doing this now is to limit the amount of new
#include baggage that will be needed by "planner support functions",
which I expect to introduce later, and which will be in relevant
datatype modules rather than anywhere near the planner.

This commit just moves relevant declarations into optimizer.h from
other header files (a couple of which go away because everything
got moved), and adjusts #include lists to match.  There's further
cleanup that could be done if we want to decide that some stuff
being exposed by optimizer.h doesn't belong in the planner at all,
but I'll leave that for another day.

Discussion: https://postgr.es/m/11460.1548706639@sss.pgh.pa.us
2019-01-29 15:48:51 -05:00
Tom Lane
a1b8c41e99 Make some small planner API cleanups.
Move a few very simple node-creation and node-type-testing functions
from the planner's clauses.c to nodes/makefuncs and nodes/nodeFuncs.
There's nothing planner-specific about them, as evidenced by the
number of other places that were using them.

While at it, rename and_clause() etc to is_andclause() etc, to clarify
that they are node-type-testing functions not node-creation functions.
And use "static inline" implementations for the shortest ones.

Also, modify flatten_join_alias_vars() and some subsidiary functions
to take a Query not a PlannerInfo to define the join structure that
Vars should be translated according to.  They were only using the
"parse" field of the PlannerInfo anyway, so this just requires removing
one level of indirection.  The advantage is that now parse_agg.c can
use flatten_join_alias_vars() without the horrid kluge of creating an
incomplete PlannerInfo, which will allow that file to be decoupled from
relation.h in a subsequent patch.

Discussion: https://postgr.es/m/11460.1548706639@sss.pgh.pa.us
2019-01-29 15:26:44 -05:00
Tom Lane
4be058fe9e In the planner, replace an empty FROM clause with a dummy RTE.
The fact that "SELECT expression" has no base relations has long been a
thorn in the side of the planner.  It makes it hard to flatten a sub-query
that looks like that, or is a trivial VALUES() item, because the planner
generally uses relid sets to identify sub-relations, and such a sub-query
would have an empty relid set if we flattened it.  prepjointree.c contains
some baroque logic that works around this in certain special cases --- but
there is a much better answer.  We can replace an empty FROM clause with a
dummy RTE that acts like a table of one row and no columns, and then there
are no such corner cases to worry about.  Instead we need some logic to
get rid of useless dummy RTEs, but that's simpler and covers more cases
than what was there before.
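
For instance (the expected plan shape is an assumption):

    -- With no FROM clause, the query now scans a dummy RTE_RESULT entry;
    -- the resulting plan is a single Result node, as before:
    EXPLAIN SELECT 2 + 2;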

For really trivial cases, where the query is just "SELECT expression" and
nothing else, there's a hazard that adding the extra RTE makes for a
noticeable slowdown; even though it's not much processing, there's not
that much for the planner to do overall.  However testing says that the
penalty is very small, close to the noise level.  In more complex queries,
this is able to find optimizations that we could not find before.

The new RTE type is called RTE_RESULT, since the "scan" plan type it
gives rise to is a Result node (the same plan we produced for a "SELECT
expression" query before).  To avoid confusion, rename the old ResultPath
path type to GroupResultPath, reflecting that it's only used in degenerate
grouping cases where we know the query produces just one grouped row.
(It wouldn't work to unify the two cases, because there are different
rules about where the associated quals live during query_planner.)

Note: although this touches readfuncs.c, I don't think a catversion
bump is required, because the added case can't occur in stored rules,
only plans.

Patch by me, reviewed by David Rowley and Mark Dilger

Discussion: https://postgr.es/m/15944.1521127664@sss.pgh.pa.us
2019-01-28 17:54:23 -05:00
Bruce Momjian
97c39498e5 Update copyright for 2019
Backpatch-through: certain files through 9.4
2019-01-02 12:44:25 -05:00
Tom Lane
c792c7db41 Change more places to be less trusting of RestrictInfo.is_pushed_down.
On further reflection, commit e5d83995e didn't go far enough: pretty much
everywhere in the planner that examines a clause's is_pushed_down flag
ought to be changed to use the more complicated behavior where we also
check the clause's required_relids.  Otherwise we could make incorrect
decisions about whether, say, a clause is safe to use as a hash clause.

Some (many?) of these places are safe as-is, either because they are
never reached while considering a parameterized path, or because there
are additional checks that would reject a pushed-down clause anyway.
However, it seems smarter to just code them all the same way rather
than rely on easily-broken reasoning of that sort.

In support of that, invent a new macro RINFO_IS_PUSHED_DOWN that should
be used in place of direct tests on the is_pushed_down flag.

Like the previous patch, back-patch to all supported branches.

Discussion: https://postgr.es/m/f8128b11-c5bf-3539-48cd-234178b2314d@proxel.se
2018-04-20 15:19:16 -04:00
Bruce Momjian
9d4649ca49 Update copyright for 2018
Backpatch-through: certain files through 9.3
2018-01-02 23:30:12 -05:00