A CaseTestExpr is currently being put into
JsonValueExpr.formatted_expr as a placeholder for the result of
evaluating JsonValueExpr.raw_expr, which in turn is evaluated
separately. However, there is no need for this indirection if
raw_expr itself can be embedded into formatted_expr and evaluated
as part of evaluating the latter, especially as there is no
special reason to evaluate it separately. So this commit makes it
so. As a result, JsonValueExpr.raw_expr no longer needs to be
evaluated in ExecInterpExpr(), eval_const_exprs_mutator() etc. and
is now only used for displaying the original "unformatted"
expression in ruleutils.c. Comments about, and code manipulating,
formatted_expr are updated to mention that it is now always set and
is the expression that gives a JsonValueExpr its runtime value.
While at it, this also removes the function makeCaseTestExpr(),
because the code in makeJsonConstructorExpr() looks more readable
without it IMO, and the function isn't used anywhere else either.
Finally, a note is added in the comment above CaseTestExpr's
definition that JsonConstructorExpr is also using it.
Backpatched to 16 from the development branch to keep the code in
sync across branches.
Reviewed-by: Álvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://postgr.es/m/CA+HiwqE4XTdfb1nW=Ojoy_tQSRhYt-q_kb6i5d4xcKyrLC1Nbg@mail.gmail.com
This commit reverts the work done by commits 3ba59ccc89 and 72e78d831a.
Those commits were incorrect in asserting that we never acquire any other
heavy-weight lock, other than a relation extension lock, after acquiring a
page lock. We can acquire a lock on catalogs while doing a catalog lookup
after acquiring a page lock.
This won't impact any existing feature, but we need to find some other way
to achieve this before parallelizing other write operations or even
improving the parallelism in vacuum (like allowing multiple workers
for an index).
Reported-by: Jaime Casanova
Author: Amit Kapila
Backpatch-through: 13
Discussion: https://postgr.es/m/CAJKUy5jffnRKNvRHKQ0LynRb0RJC-o4P8Ku3x9vGAVLwDBWumQ@mail.gmail.com
An outer join cannot be formed using an input path that is parameterized
by a value that is supposed to be nulled by the outer join. This is
obviously nonsensical, and it could lead to a bad plan being selected;
although currently it seems that we'll hit various sanity-check
assertions first.
I think that such cases were formerly prevented by the delay_upper_joins
mechanism, but now that that's gone we need an explicit check.
(Perhaps we should avoid generating baserel paths that could
lead to this situation in the first place; but it seems like
having a defense at the join level would be a good idea anyway.)
Richard Guo and Tom Lane, per report from Jaime Casanova
Discussion: https://postgr.es/m/CAJKUy5g2uZRrUDZJ8p-=giwcSHVUn0c9nmdxPSY0jF0Ov8VoEA@mail.gmail.com
If the inner-side expressions contain PARAM_EXEC Params, we must
re-hash whenever the values of those Params change. The executor
mechanism for that exists already, but we failed to invoke it because
finalize_plan() neglected to search the Hash.hashkeys field for
Params. This allowed a previous scan's hash table to be re-used
when it should not be, leading to rows missing from the join's output.
(I believe incorrectly-included join rows are impossible however,
since checking the real hashclauses would reject false matches.)
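For illustration, a minimal sketch of the kind of one-line change
described, assuming it lands in finalize_plan()'s switch in subselect.c
next to the existing finalize_primnode() calls (the placement and
surrounding code are assumptions, not a quote from the patch):

    case T_Hash:
        /* also look for PARAM_EXEC Params in the hash key expressions,
         * so that a change in their values forces a rebuild of the
         * hash table rather than re-use of a stale one */
        finalize_primnode((Node *) ((Hash *) plan)->hashkeys,
                          &context);
        break;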
This bug is very ancient, dating probably to d24d75ff1 of 7.4.
Sadly, this simple fix depends on the plan representational changes
made by 2abd7ae9b, so it will only work back to v12. I thought
about trying to make some kind of hack for v11, but I'm leery
of putting code significantly different from what is used in the
newer branches into a nearly-EOL branch. Seeing that the bug
escaped detection for a full twenty years, problematic cases
must be rare; so I don't feel too awful about leaving v11 as-is.
Per bug #17985 from Zuming Jiang. Back-patch to v12.
Discussion: https://postgr.es/m/17985-748b66607acd432e@postgresql.org
I removed the delay_upper_joins mechanism in commit b448f1c8d,
reasoning that it was only needed when we have a single-table
(SELECT ... WHERE) as the immediate RHS child of a left join,
and we could get rid of that by hoisting the WHERE condition into
the parent join's quals. However that new code missed a case:
we could have "foo LEFT JOIN ((SELECT ... WHERE) LEFT JOIN bar)",
and if the two left joins can be commuted then we now have the
problematic query shape. We can fix this, too, easily enough,
by allowing the syntactically-lower left join to pass through
its parent qual location pointer recursively. That lets
prepjointree.c discard the SELECT by temporarily hoisting the
WHERE condition into the ancestor join's qual.
Per bug #17978 from Zuming Jiang.
Discussion: https://postgr.es/m/17978-12f3d93a55297266@postgresql.org
This avoids an assertion failure when outer joins are rearranged
per identity 3. Listing only the baserels from a PlaceHolderVar's
ph_lateral set should be enough to ensure that the required values
are available when we need to compute the PHV --- it's what we
did before inventing nullingrel sets, after all. It's a bit
unsatisfying; but with beta2 hard upon us, there's not time to
look for an aesthetically cleaner fix.
Richard Guo and Tom Lane
Discussion: https://postgr.es/m/CAMbWs48Jcw-NvnxT23WiHP324wG44DvzcH1j4hc0Zn+3sR9cfg@mail.gmail.com
It turns out that the fixes we applied in commits bfd332b3f
and 63e4f13d2 were not nearly enough to solve the problem.
We'd focused narrowly on subquery RTEs with lateral references,
but lateral references can occur in several other RTE kinds
such as function RTEs. Putting the same hack into half a dozen
code paths seems quite unattractive. Hence, revert the code changes
(but not the test cases) from those commits and instead solve it
centrally in identify_current_nestloop_params(), as Richard proposed
originally. This is a bit annoying because it could mask erroneous
nullingrels in nestloop params that are generated from non-LATERAL
parameterized paths; but on balance I don't see a better way.
Maybe at some future time we'll be motivated to find a more rigorous
approach to nestloop params, but that's not happening for beta2.
Richard Guo and Tom Lane
Discussion: https://postgr.es/m/CAMbWs48Jcw-NvnxT23WiHP324wG44DvzcH1j4hc0Zn+3sR9cfg@mail.gmail.com
Here we adjust relation_has_unique_index_for() so that it no longer makes
use of partial unique indexes as uniqueness proofs. It is incorrect to
use these because the predicates used by check_index_predicates() to set
predOK rely not only on baserestrictinfo quals as proofs, but also on
quals from join conditions. For relation_has_unique_index_for()'s case, we
need to know the relation is unique for a given set of columns before any
joins are evaluated, so if predOK was only set to true due to some join
qual, then it's unsafe to use such indexes in
relation_has_unique_index_for(). The final plan may not even make use
of that index, which could result in reading tuples that are not as
unique as the planner previously expected them to be.
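A minimal sketch of the sort of adjustment described, inside
relation_has_unique_index_for()'s loop over the relation's indexlist
(surrounding code elided; "ind" is the loop's IndexOptInfo pointer):

    /*
     * Ignore partial indexes: their predOK flag may have been proven
     * using join quals, which is not a valid uniqueness proof before
     * any joins are evaluated.
     */
    if (!ind->unique || !ind->immediate || ind->indpred != NIL)
        continue;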
Bug: #17975
Reported-by: Tor Erik Linnerud
Backpatch-through: 11, all supported versions
Discussion: https://postgr.es/m/17975-98a90c156f25c952%40postgresql.org
Since commit b448f1c8d, we've been able to remove left joins
(that are otherwise removable) even when they are underneath
other left joins, a case that was previously prevented by a
delay_upper_joins check. This is a clear improvement, but
it has a surprising side-effect: it's now possible that there
are EquivalenceClasses whose relid sets mention the removed
baserel and/or outer join. If we fail to clean those up,
we may drop essential join quals due to not having any join
level that appears to satisfy their relid sets.
(It's not quite 100% clear that this was impossible before.
But the lack of complaints since we added join removal a dozen
years ago strongly suggests that it was impossible.)
Richard Guo and Tom Lane, per bug #17976 from Zuming Jiang
Discussion: https://postgr.es/m/17976-4b638b525e9a983b@postgresql.org
The issue fixed in commit bfd332b3f can also bite Memoize plans,
because of the separate copies of lateral reference Vars made
by paraminfo_get_equal_hashops. Apply the same hacky fix there.
(In passing, clean up shaky grammar in the existing comments
for this function.)
Richard Guo
Discussion: https://postgr.es/m/CAMbWs4-krwk0Wbd6WdufMAupuou_Ua73ijQ4XQCr1Mb5BaVtKQ@mail.gmail.com
If we apply outer join identity 3 when relation C is a subquery
having lateral references to relation B, then the lateral references
within C continue to bear the original syntactically-correct
varnullingrels marks, but that won't match what is available from
the outer side of the nestloop. Compensate for that in
process_subquery_nestloop_params(). This is a slightly hacky fix,
but we certainly don't want to re-plan C in toto for each possible
outer join order, so there's not a lot of better alternatives.
Richard Guo and Tom Lane, per report from Markus Winand
Discussion: https://postgr.es/m/DFBB2D25-DE97-49CA-A60E-07C881EA59A7@winand.at
Split nbtree's _bt_getbuf function in two: code that read locks or write
locks existing pages remains in _bt_getbuf, while code that deals with
allocating new pages is moved to a new, dedicated function called
_bt_allocbuf. This simplifies most _bt_getbuf callers, since it is no
longer necessary for them to pass a heaprel argument. Many of the
changes to nbtree from commit 61b313e4 can be reverted. This minimizes
the divergence between HEAD/PostgreSQL 16 and earlier release branches.
_bt_allocbuf replaces the previous nbtree idiom of passing P_NEW to
_bt_getbuf. There are only 3 affected call sites, all of which continue
to pass a heaprel for recovery conflict purposes. Note that nbtree's
use of P_NEW was superficial; nbtree never actually relied on the P_NEW
code paths in bufmgr.c, so this change is strictly mechanical.
GiST already took the same approach; it has a dedicated function for
allocating new pages called gistNewBuffer(). That factor allowed commit
61b313e4 to make much more targeted changes to GiST.
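Illustrative before/after of an affected call site (a sketch; the exact
signatures are assumed from the description above rather than quoted
from the patch):

    /* Before: a new page was requested by passing P_NEW to _bt_getbuf */
    buf = _bt_getbuf(rel, heaprel, P_NEW, BT_WRITE);

    /* After: allocation has a dedicated function, and plain reads and
     * writes of existing pages no longer need a heaprel */
    buf = _bt_allocbuf(rel, heaprel);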
Author: Peter Geoghegan <pg@bowt.ie>
Reviewed-By: Heikki Linnakangas <hlinnaka@iki.fi>
Discussion: https://postgr.es/m/CAH2-Wz=8Z9qY58bjm_7TAHgtW6RzZ5Ke62q5emdCEy9BAzwhmg@mail.gmail.com
A placeholder that references the outer join's relid in ph_eval_at
is logically "above" the join, and therefore we can't remove its
PlaceHolderInfo: it might still be used somewhere in the query.
This was not an issue pre-v16 because we failed to remove the join
at all in such cases. The new outer-join-aware-Var infrastructure
permits deducing that it's okay to remove the join, but then we
have to clean up correctly afterwards.
Report and fix by Richard Guo
Discussion: https://postgr.es/m/CAMbWs4_tuVn9EwwMcggGiZJWWstdXX_ci8FeEU17vs+4nLgw3w@mail.gmail.com
When we're deleting a no-op LEFT JOIN from the query, we must remove
the join's joinclauses from surviving relations' joininfo lists.
The invention of "cloned" clauses in 2489d76c4 broke the logic for
that; it'd fail to remove clones that include OJ relids outside the
doomed join's min relid sets, which could happen if that join was
previously discovered to commute with some other join.
This accidentally failed to cause problems in the majority of cases,
because we'd never decide that such a cloned clause was evaluatable at
any surviving join. However, Richard Guo discovered a case where that
did happen, leading to "no relation entry for relid" errors later.
Also, adding assertions that a non-removed clause contains no Vars from
the doomed join exposes that there are quite a few existing regression
test cases where the problem happens but is accidentally not exposed.
The fix for this is just to include the target join's commute_above_r
and commute_below_l sets in the relid set we test against when
deciding whether a join clause is "pushed down" and thus not
removable.
While at it, do a little refactoring: the join's relid set can be
computed inside remove_rel_from_query rather than in the caller.
Patch by me; thanks to Richard Guo for review.
Discussion: https://postgr.es/m/CAMbWs4_PHrRqTKDNnTRsxxQy6BtYCVKsgXm1_gdN2yQ=kmcO5g@mail.gmail.com
We've had multiple issues with the clause_is_computable_at logic that
I introduced in 2489d76c4: it's been known to accept more than one
clone of the same qual at the same plan node, and also to accept no
clones at all. It's looking impractical to get it 100% right on the
basis of the currently-stored information, so fix it by introducing a
new RestrictInfo field "incompatible_relids" that explicitly shows
which outer joins a given clone mustn't be pushed above.
In principle we could populate this field in every RestrictInfo, but
that would cost space and there doesn't presently seem to be a need
for it in general. Also, while deconstruct_distribute_oj_quals can
easily fill the field with the remaining members of the commutative
join set that it's considering, computing it in the general case
seems again pretty complicated. So for now, just fill it for
clone quals.
Along the way, fix a bug that may or may not be only latent:
equivclass.c was generating replacement clauses with is_pushed_down
and has_clone/is_clone markings that didn't match their
required_relids. This led me to conclude that leaving the clone flags
out of make_restrictinfo's purview wasn't such a great idea after all,
so add them.
Per report from Richard Guo.
Discussion: https://postgr.es/m/CAMbWs48EYi_9-pSd0ORes1kTmTeAjT4Q3gu49hJtYCbSn2JyeA@mail.gmail.com
Use the clause's required_relids not clause_relids for testing
whether it is computable at the current join level, if it is a
clone clause generated by deconstruct_distribute_oj_quals().
Arguably, this is more correct and we should do it for all clauses;
that would at least remove the handwavy claim that we are doing
it to save cycles compared to inspecting Vars individually.
However, attempting to do that exposes that we are not being careful
to compute an accurate value for required_relids in all cases.
I'm unsure whether it's a good idea to attempt to do that for v16,
or leave it as future clean-up. In the meantime, this quick hack
demonstrably fixes some cases, so let's squeeze it in for beta1.
Patch by me, but great thanks to Richard Guo for investigation
and testing. The new test cases are all modeled on his examples.
Discussion: https://postgr.es/m/CAMbWs4-_vwkBij4XOQ5ukxUvLgwTm0kS5_DO9CicUeKbEfKjUw@mail.gmail.com
Run pgindent, pgperltidy, and reformat-dat-files.
This set of diffs is a bit larger than typical. We've updated to
pg_bsd_indent 2.1.2, which properly indents variable declarations that
have multi-line initialization expressions (the continuation lines are
now indented one tab stop). We've also updated to perltidy version
20230309 and changed some of its settings, which reduces its desire to
add whitespace to lines to make assignments etc. line up. Going
forward, that should make for fewer random-seeming changes to existing
code.
Discussion: https://postgr.es/m/20230428092545.qfb3y5wcu4cm75ur@alvherre.pgsql
In commit 9df8f903e I (tgl) switched join_is_removable() from
using the min relid sets of the join under consideration to
using its full syntactic relid sets. This was a mistake,
as it allowed join removal in cases where a reference to the
join output would survive in some syntactically-lower join
condition. Revert to the former coding.
Richard Guo
Discussion: https://postgr.es/m/CAMbWs4-EU9uBGSP7G-iTwLBhRQ=rnZKvFDhD+n+xhajokyPCKg@mail.gmail.com
Pass clause_is_computable_at() the RestrictInfo under consideration, not just the
clause_relids. This should save some trivial amount of
code at the call sites, and it gives us more flexibility
about what clause_is_computable_at() does. There's no
actual functional change here, though.
Discussion: https://postgr.es/m/3564467.1684352557@sss.pgh.pa.us
After applying outer-join identity 3 in the forward direction,
it was possible for the planner to mistakenly apply a qual clause
from above the two outer joins at the now-lower join level.
This can give the wrong answer, since a value that would get nulled
by the now-upper join might not yet be null.
To fix, when we perform such a transformation, consider that the
now-lower join hasn't really completed the outer join it's nominally
responsible for and thus its relid set should not include that OJ's
relid (nor should its output Vars have that nullingrel bit set).
Instead we add those bits when the now-upper join is performed.
The existing rules for qual placement then suffice to prevent
higher qual clauses from dropping below the now-upper join.
There are a few complications from needing to consider transitive
closures in case multiple pushdowns have happened, but all in all
it's not a very complex patch.
This is all new logic (from 2489d76c4) so no need to back-patch.
The added test cases all have the same results as in v15.
Tom Lane and Richard Guo
Discussion: https://postgr.es/m/0b819232-4b50-f245-1c7d-c8c61bf41827@postgrespro.ru
It seems like the code that these checks are backstopping may have
a few bugs left in it. Use a test-and-elog so that the tests are
performed even in non-assert builds, and so that we get something
more informative than "server closed the connection" on failure.
Committed separately with the idea that eventually we'll revert
this. It might be awhile though.
Discussion: https://postgr.es/m/3014965.1684293045@sss.pgh.pa.us
This is equivalent to a revert of f193883 and fb32748, with the addition
that the declaration of the SQLValueFunction node needs to gain a couple
of node_attr for query jumbling. The performance impact of removing the
function call inlining is proving to be too huge for some workloads
where these are used. A worst-case test case of involving only simple
SELECT queries with a SQL keyword is proving to lead to a reduction of
10% in TPS via pgbench and prepared queries on a high-end machine.
None of the tests I ran back for this set of changes saw such a huge
gap, but Alexander Lakhin and Andres Freund have found that this can be
noticeable. Keeping the older performance would mean to do more
inlining in the executor when using COERCE_SQL_SYNTAX for a function
expression, similarly to what SQLValueFunction does. This requires more
redesign work and there is little time until 16beta1 is released, so for
now reverting the change is the best way forward, bringing back the
previous performance.
Bump catalog version.
Reported-by: Alexander Lakhin
Discussion: https://postgr.es/m/b32bed1b-0746-9b20-1472-4bdc9ca66d52@gmail.com
I've had a bee in my bonnet for some time about getting rid of
RestrictInfo.is_pushed_down, because it's squishily defined and
requires not-inexpensive extra tests to use (cf RINFO_IS_PUSHED_DOWN).
In commit 2489d76c4, I tried to make remove_rel_from_query() not
depend on that macro; but the replacement test is buggy,
as exposed by a report from Rushabh Lathia and Robert Haas.
That change was pretty incidental to the main goal of 2489d76c4,
so let's just revert it for now. (Getting rid of is_pushed_down
is still far away, anyway.)
Discussion: https://postgr.es/m/CA+TgmoYco=hmg+iX1CW9Y1_CzNoSL81J03wUG-d2_3=rue+L2A@mail.gmail.com
If an SRF in the FROM clause references a table having row-level
security policies, and we inline that SRF into the calling query,
we neglected to mark the plan as potentially dependent on which
role is executing it. This could lead to later executions in the
same session returning or hiding rows that should have been hidden
or returned instead.
Our thanks to Wolfgang Walther for reporting this problem.
Stephen Frost and Tom Lane
Security: CVE-2023-2455
This reverts commit e056c557ae and minor later fixes thereof.
There are a few problems in this new feature -- most notably regarding
pg_upgrade behavior, but others as well. This new feature is not in any
way critical on its own, so instead of scrambling to fix it we revert it
and try again in early 17 with these issues in mind.
Discussion: https://postgr.es/m/3801207.1681057430@sss.pgh.pa.us
Our policy since commit ab77a5a45 has been that a plan node having
any initplans is automatically not parallel-safe. (This could be
relaxed, but not today.) clean_up_removed_plan_level neglected
this, and could attach initplans to a parallel-safe child plan
node without clearing the plan's parallel-safe flag. That could
lead to "subplan was not initialized" errors at runtime, in case
an initplan referenced another one and only the referencing one
got transmitted to parallel workers.
The fix in clean_up_removed_plan_level is trivial enough.
materialize_finished_plan also moves initplans from one node
to another, but it's okay because it already copies the source
node's parallel_safe flag. The other place that does this kind
of thing is standard_planner's hack to inject a top-level Gather
when debug_parallel_query is active. But that's actually dead
code given that we're correctly enforcing the "initplans aren't
parallel safe" rule, so just replace it with an Assert that
there are no initplans.
Also improve some related comments.
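For illustration, the trivial fix in clean_up_removed_plan_level might
look roughly like this (a sketch; the existing initplan-moving code and
variable names are assumed):

    /*
     * Move the removed parent's initplans down to the surviving child.
     * Per the rule from ab77a5a45, a node carrying initplans must not
     * be marked parallel-safe.
     */
    if (parent->initPlan != NIL)
    {
        child->initPlan = list_concat(parent->initPlan, child->initPlan);
        child->parallel_safe = false;
    }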
Normally we'd add a regression test case for this sort of bug.
The mistake itself is already reached by existing tests, but there
is accidentally no visible problem. The only known test case that
creates an actual failure seems too indirect and fragile to justify
keeping it as a regression test (not least because it fails to fail
in v11, though the bug is clearly present there too).
Per report from Justin Pryzby. Back-patch to all supported branches.
Discussion: https://postgr.es/m/ZDVt6MaNWkRDO1LQ@telsasoft.com
We now create pg_constraint rows for NOT NULL constraints with
contype='n'.
We propagate these constraints during operations such as adding
inheritance relationships, creating and attaching partitions, creating
tables LIKE other tables. We mostly follow the well-known rules of
conislocal and coninhcount that we have for CHECK constraints, with some
adaptations; for example, as opposed to CHECK constraints, we don't
match NOT NULL ones by name when descending a hierarchy to alter it;
instead we match by column number. This means we don't require the
constraint names to be identical across a hierarchy.
For now, we omit them from system catalogs. Maybe this is worth
reconsidering. We don't support NOT VALID or DEFERRABLE clauses
either; these can be added as separate features later (this patch is
already large and complicated enough.)
This has been very long in the making. The first patch was written by
Bernd Helmle in 2010 to add a new pg_constraint.contype value ('n'),
which I (Álvaro) then hijacked in 2011 and 2012, until that one was
killed by the realization that we ought to use contype='c' instead:
manufactured CHECK constraints. However, later SQL standard
development, as well as nonobvious emergent properties of that design
(mostly, failure to distinguish them from "normal" CHECK constraints as
well as the performance implication of having to test the CHECK
expression) led us to reconsider this choice, so now the current
implementation uses contype='n' again.
In 2016 Vitaly Burovoy also worked on this feature[1] but found no
consensus for his proposed approach, which was claimed to be closer to
the letter of the standard, requiring additional pg_attribute columns to
track the OID of the NOT NULL constraint for that column.
[1] https://postgr.es/m/CAKOSWNkN6HSyatuys8xZxzRCR-KL1OkHS5-b9qd9bf1Rad3PLA@mail.gmail.com
Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
Author: Bernd Helmle <mailings@oopsware.de>
Reviewed-by: Justin Pryzby <pryzby@telsasoft.com>
Reviewed-by: Peter Eisentraut <peter.eisentraut@enterprisedb.com>
Discussion: https://postgr.es/m/CACA0E642A0267EDA387AF2B%40%5B172.26.14.62%5D
Discussion: https://postgr.es/m/AANLkTinLXMOEMz+0J29tf1POokKi4XDkWJ6-DDR9BKgU@mail.gmail.com
Discussion: https://postgr.es/m/20110707213401.GA27098@alvh.no-ip.org
Discussion: https://postgr.es/m/1343682669-sup-2532@alvh.no-ip.org
Discussion: https://postgr.es/m/CAKOSWNkN6HSyatuys8xZxzRCR-KL1OkHS5-b9qd9bf1Rad3PLA@mail.gmail.com
Discussion: https://postgr.es/m/20220817181249.q7qvj3okywctra3c@alvherre.pgsql
Merge and hash joins can support antijoin with the non-nullable input
on the right, using very simple combinations of their existing logic
for right join and anti join. This gives the planner more freedom
about how to order the join. It's particularly useful for hash join,
since we may now have the option to hash the smaller table instead
of the larger.
Richard Guo, reviewed by Ronan Dunklau and myself
Discussion: https://postgr.es/m/CAMbWs48xh9hMzXzSy3VaPzGAz+fkxXXTUbCLohX1_L8THFRm2Q@mail.gmail.com
This is done in preparation for logical decoding on standby, which needs to
include whether visibility-affecting WAL records are about a (user) catalog
table, something that is only known for the table, not the indexes.
It's also nice to be able to pass the heap relation to GlobalVisTestFor() in
vacuumRedirectAndPlaceholder().
Author: "Drouvot, Bertrand" <bertranddrouvot.pg@gmail.com>
Discussion: https://postgr.es/m/21b700c3-eecf-2e05-a699-f8c78dd31ec7@gmail.com
Full and right outer joins were not supported in the initial
implementation of Parallel Hash Join because of deadlock hazards (see
discussion). Therefore FULL JOIN inhibited parallelism, as the other
join strategies can't do that in parallel either.
Add a new PHJ phase PHJ_BATCH_SCAN that scans for unmatched tuples on
the inner side of one batch's hash table. For now, sidestep the
deadlock problem by terminating parallelism there. The last process to
arrive at that phase emits the unmatched tuples, while others detach and
are free to go and work on other batches, if there are any, but
otherwise they finish the join early.
That unfairness is considered acceptable for now, because it's better
than no parallelism at all. The build and probe phases are run in
parallel, and the new scan-for-unmatched phase, while serial, is usually
applied to the smaller of the two relations and is either limited by
some multiple of work_mem, or it's too big and is partitioned into
batches and then the situation is improved by batch-level parallelism.
Author: Melanie Plageman <melanieplageman@gmail.com>
Author: Thomas Munro <thomas.munro@gmail.com>
Reviewed-by: Thomas Munro <thomas.munro@gmail.com>
Discussion: https://postgr.es/m/CA%2BhUKG%2BA6ftXPz4oe92%2Bx8Er%2BxpGZqto70-Q_ERwRaSyA%3DafNg%40mail.gmail.com
We were transferring partPruneInfos from PlannerInfo into PlannerGlobal
incorrectly, essentially relying on all of them being transferred, and
adjusting their list indexes based on that. But apparently it's
possible that some of them are skipped, so that strategy leads to a
corrupted execution tree. Instead, adjust each Append/MergeAppend's
partpruneinfo index as we copy from one list to the other, which seems
safer anyway. This requires adjusting the RT offset of the RTE
referenced in each partPruneInfo ahead of actually adjusting the RTE
itself, which seems a bit too ad-hoc.
This problem was introduced by commit ec38694894. However, it may be
that we no longer require the change introduced there, so perhaps we
should revert both the present commit and that one.
Problem noticed by sqlsmith.
Author: Amit Langote <amitlangote09@gmail.com>
Discussion: https://postgr.es/m/CA+HiwqG6tbc2oadsbyyy24b2AL295XHQgyLRWghmA7u_SL1K8A@mail.gmail.com
MERGE planning could fail with "variable not found in subplan target
list" if the target table is partitioned and all its partitions are
excluded at plan time, or in the case where it has no partitions but
used to have some. This happened because distribute_row_identity_vars
thought it didn't need to make the target table's reltarget list
fully valid; but if we generate a join plan then that is required
because the dummy Result node's tlist will be made from the reltarget.
The same logic appears in distribute_row_identity_vars in v14,
but AFAICS the problem is unreachable in that branch for lack of
MERGE. In other updating statements, the target table is always
inner-joined to any other tables, so if the target is known dummy
then the whole plan reduces to dummy, so no join nodes are created.
So I'll refrain from back-patching this code change to v14 for now.
Per report from Alvaro Herrera.
Discussion: https://postgr.es/m/20230328112248.6as34mlx5sr4kltg@alvherre.pgsql
When extracting an attr from a cached tuple in the syscache with
SysCacheGetAttr, the isnull parameter must be checked even when the
attr cannot be NULL. For cases when this is known beforehand, a
wrapper is introduced which performs the error handling internally
on behalf of the caller, invoking an elog in case of a NULL attr.
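For illustration, the intended calling pattern might look roughly like
the following (the wrapper's name and the surrounding declarations are
assumptions here, not quotes from the patch):

    /* Before: every caller must check isnull itself */
    datum = SysCacheGetAttr(PROCOID, tuple, Anum_pg_proc_prosrc, &isnull);
    if (isnull)
        elog(ERROR, "null prosrc");

    /* After: the wrapper performs the check and elog()s on a NULL attr */
    datum = SysCacheGetAttrNotNull(PROCOID, tuple, Anum_pg_proc_prosrc);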
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Peter Eisentraut <peter.eisentraut@enterprisedb.com>
Reviewed-by: David Rowley <dgrowleyml@gmail.com>
Discussion: https://postgr.es/m/AD76405E-DB45-46B6-941F-17B1EB3A9076@yesql.se
As demonstrated by David Johnston, the Memoize cache hit ratio calculation
wasn't quite correct.
This change only affects the estimated hit ratio when the number of
entries to be cached is estimated not to fit inside the cache. For
example, suppose we expect 2000 distinct cache key values but, due to
memory constraints, only expect to be able to cache 1000 of those at
once, over an estimated 10000 calls. If we could store all entries, the
hit ratio would be 80%, accounting for the first 2000 of the 10000 calls
being cache misses because the value is not cached yet. Since we can
only store 1000 of the 2000 distinct possible values at once, that 80%
should be reduced by half, making the final estimate 40%.
Previously, the calculation would have produced an estimated hit ratio of
30%, which wasn't correct.
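The arithmetic, recomputed as a self-contained sketch of the example
above (variable names are illustrative, not the planner's):

    #include <stdio.h>

    int
    main(void)
    {
        double ndistinct = 2000.0;   /* expected distinct cache keys */
        double est_entries = 1000.0; /* keys that fit in the cache at once */
        double calls = 10000.0;      /* expected lookups */

        /* hit ratio if every distinct key fit in the cache: 0.80 */
        double hit_ratio = (calls - ndistinct) / calls;

        /* scale down by the fraction of keys that actually fit: 0.40 */
        hit_ratio *= est_entries / ndistinct;

        printf("estimated hit ratio: %.0f%%\n", hit_ratio * 100.0);
        return 0;
    }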
Apply to master only so as not to destabilize plans in the back branches.
Reported-by: David G. Johnston
Discussion: https://postgr.es/m/CAKFQuwZEmcNk3YQo2Xj4EDUOdY6qakad31rOD1Vc4q1_s68-Ew@mail.gmail.com
Discussion: https://postgr.es/m/CAApHDvrV44LwiF4W_qf_RpbGYWSgp1kF=cZr+kTRRaALUfmXqw@mail.gmail.com
The Memoize executor node stores the cache key values along with the
tuple(s) found in the outer node which match each key value. However,
when the planner tried to estimate how many entries could be
stored in the cache, it didn't take into account that the cache key must
also be stored. In many cases, this won't make a large difference as the
key is likely small in comparison to the tuple(s) being stored. However,
it's not impossible to craft cases where the key could take more memory
than the tuple(s) stored for it.
Here we adjust the planner so it takes into account the estimated amount
of memory to store the cache key. Effectively, this change will reduce
the estimated cache hit ratio when it's thought that not all items will
fit in the cache, thus Memoize will become more expensive in such cases.
The executor already takes into account the memory consumed by the cache
key, so here we only need to adjust the planner.
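As a rough illustration of the adjustment (a sketch; the helper and its
inputs are assumptions, not the planner's actual code): the estimate of
how many entries fit must now charge for the key as well as the cached
tuple(s):

    #include <math.h>

    /* Hypothetical helper: cache entries that fit once the key is charged */
    static double
    est_cache_entries(double mem_limit_bytes,
                      double tuple_bytes, double key_bytes)
    {
        return floor(mem_limit_bytes / (tuple_bytes + key_bytes));
    }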
Discussion: https://postgr.es/m/CAApHDvqGErGuyBfQvBQrTCHDbzLTqoiW=_G9sOzeFxWEc_7auA@mail.gmail.com
The logic added in 9d9c02ccd to determine when a qual can be used as a
WindowClause run condition failed to correctly check for subqueries in the
qual. This was being done correctly for normal subquery qual pushdowns;
it's just that 9d9c02ccd failed to follow that lead.
This also fixes various other cases where transforming the qual into a
WindowClause run condition in the subquery should have been disallowed.
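A hedged sketch of the sort of check meant here, reusing existing
clause-inspection helpers (whether these are the exact calls used by the
fix is an assumption):

    /*
     * Refuse to use the qual as a WindowClause run condition if it
     * contains a sub-select or a volatile function, mirroring the
     * checks applied for ordinary subquery qual pushdown.
     */
    if (contain_subplans((Node *) clause) ||
        contain_volatile_functions((Node *) clause))
        return false;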
Bug: #17826
Reported-by: Anban Company
Discussion: https://postgr.es/m/17826-7d8750952f19a5f5@postgresql.org
Backpatch-through: 15, where 9d9c02ccd was introduced.
create_append_path() would only apply get_baserel_parampathinfo
when the path is for a partitioned table, but it's also potentially
useful for paths for UNION ALL appendrels. Specifically, that
supports building a Memoize path atop this one.
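Roughly, the change described amounts to something like the following in
create_append_path() (a sketch; the exact surrounding condition is an
assumption):

    /*
     * Compute parameterization info for any baserel-level Append, not
     * only when the rel is a partitioned table, so that e.g. a Memoize
     * path can be built atop a parameterized UNION ALL appendrel.
     */
    if (rel->reloptkind == RELOPT_BASEREL)
        pathnode->path.param_info = get_baserel_parampathinfo(root, rel,
                                                              required_outer);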
While we're in the vicinity, delete some dead code in
create_merge_append_plan(): there's no need for it to support
parameterized MergeAppend paths, and it doesn't look like that
is going to change anytime soon. It'll be easy enough to undo
this when/if it becomes useful.
Richard Guo
Discussion: https://postgr.es/m/CAMbWs4_ABSu4PWG2rE1q10tJugEXHWgru3U8dAgkoFvgrb6aEA@mail.gmail.com
preprocess_targetlist thought PHVs couldn't appear here.
It was mistaken, as per report from Önder Kalacı.
Surveying other pull_var_clause calls, I noted no similar errors,
but I did notice that qual_is_pushdown_safe's assertion about
!contain_window_function was pointless, because the following
pull_var_clause call would complain about them anyway. In HEAD
only, remove the redundant Assert and improve the commentary.
Discussion: https://postgr.es/m/CACawEhUuum-gC_2S3sXLTcsk7bUSPSHOD+g1ZpfKaDK-KKPPWA@mail.gmail.com
adjust_appendrel_attrs can't transfer nullingrel labeling to a non-Var
translation expression (mainly because it's too late to wrap such an
expression in a PlaceHolderVar). I'd supposed in commit 2489d76c4
that that restriction was unreachable because we'd not attempt to push
problematic clauses down to an appendrel child relation. I forgot that
set_append_rel_size blindly converts all the parent rel's joininfo
clauses to child clauses, and that list could well contain clauses
from above a nulling outer join.
We might eventually have to devise a direct fix for this implementation
restriction, but for now it seems enough to filter out troublesome
clauses while constructing the child's joininfo list. Such clauses
are certainly not useful while constructing paths for the child rel;
they'll have to be applied later when we join the completed appendrel
to something else. So we don't need them here, and omitting them from
the list should save a few cycles while processing the child rel.
Per bug #17832 from Marko Tiikkaja.
Discussion: https://postgr.es/m/17832-d0a8106cdf1b722e@postgresql.org
This function has been semi-deprecated ever since we invented
bms_next_member(). Its habit of scribbling on the input bitmapset
isn't great, plus for sufficiently large bitmapsets it would take
O(N^2) time to complete a loop. Now we have the additional problem
that reducing the input to empty while leaving it still accessible
would violate a planned invariant. So let's just get rid of it,
after updating the few extant callers to use bms_next_member().
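For reference, the caller conversion is mechanical; a sketch of the two
idioms (bitmapset variable names and process_member() are illustrative):

    /* Old idiom: destroys its input and can take O(N^2) time overall */
    while ((x = bms_first_member(tmpset)) >= 0)
        process_member(x);

    /* New idiom: leaves the input intact and walks it in a single pass */
    x = -1;
    while ((x = bms_next_member(inputset, x)) >= 0)
        process_member(x);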
Patch by me; thanks to Nathan Bossart and Richard Guo for review.
Discussion: https://postgr.es/m/1159933.1677621588@sss.pgh.pa.us
It's possible, in admittedly-rather-contrived cases, for an eclass
to generate a derived "join" qual that constrains the post-outer-join
value(s) of some RHS variable(s) without mentioning the LHS at all.
While the mechanisms were set up to work for this, we fell foul of
the "get_common_eclass_indexes" filter installed by commit 3373c7155:
it could decide that such an eclass wasn't relevant to the join, so
that the required qual clause wouldn't get emitted there or anywhere
else.
To fix, apply get_common_eclass_indexes only at inner joins, where
its rule is still valid. At an outer join, fall back to examining all
eclasses that mention either input (or the OJ relid, though it should
be impossible for an eclass to mention that without mentioning either
input). Perhaps we can improve on that later, but the cost/benefit of
adding more complexity to skip some irrelevant eclasses is dubious.
To allow cheaply distinguishing outer from inner joins, pass the
ojrelid to generate_join_implied_equalities as a separate argument.
This also allows cleaning up some sloppiness that had crept into
the definition of its join_relids argument, and it allows accurate
calculation of nominal_join_relids for a child outer join. (The
latter oversight seems not to have been a live bug, but it certainly
could have caused problems in future.)
Also fix what might be a live bug in check_index_predicates: it was
being sloppy about what it passed to generate_join_implied_equalities.
Per report from Richard Guo.
Discussion: https://postgr.es/m/CAMbWs4-DsTBfOvXuw64GdFss2=M5cwtEhY=0DCS7t2gT7P6hSA@mail.gmail.com