This commit adds columns amoppurpose and amopsortfamily to pg_amop, and
column amcanorderbyop to pg_am. For the moment all the entries in
amcanorderbyop are "false", since the underlying support isn't there yet.
Also, extend the CREATE OPERATOR CLASS/ALTER OPERATOR FAMILY commands with
[ FOR SEARCH | FOR ORDER BY sort_operator_family ] clauses to allow the new
columns of pg_amop to be populated, and create pg_dump support for dumping
that information.
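
For illustration, the new clauses look like this (the opclass name, strategy
numbers, and operators below are purely invented, and the support functions a
real opclass would need are left out):

    CREATE OPERATOR CLASS demo_point_ops
        FOR TYPE point USING gist AS
            -- ordinary search operator (FOR SEARCH is the default)
            OPERATOR 1  <<  (point, point) FOR SEARCH,
            -- ordering operator, sorted per the float_ops btree family
            OPERATOR 15 <-> (point, point) FOR ORDER BY float_ops;
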
I also added some documentation, although it's perhaps a bit premature
given that the feature doesn't do anything useful yet.
Teodor Sigaev, Robert Haas, Tom Lane
We had originally made the stronger assumption that NOT A refutes any B
if B implies A, but this fails in three-valued logic, because we need to
prove that B is false, not just that it's not true. However, the logic does
go through if B is equal to A.
Recognizing this limited case is enough to handle examples that arise when
we have simplified "bool_var = true" or "bool_var = false" to just "bool_var"
or "NOT bool_var". If we had not done that simplification then the
btree-operator proof logic would have been able to prove that the expressions
were contradictory, but only for identical expressions being compared to the
constants; so handling identical A and B covers all the same cases.
The motivation for doing this is to avoid unexpected asymmetrical behavior
when a partitioned table uses a boolean partitioning column, as in today's
gripe from Dominik Sander.
Back-patch to 8.2, which is as far back as predicate_refuted_by attempts to
do anything at all with NOTs.
The purpose of this change is to eliminate the need for every caller
of SearchSysCache, SearchSysCacheCopy, SearchSysCacheExists,
GetSysCacheOid, and SearchSysCacheList to know the maximum number
of allowable keys for a syscache entry (currently 4). This will
make it far easier to increase the maximum number of keys in a
future release should we choose to do so, and it makes the code
shorter, too.
Design and review by Tom Lane.
8.2beta but never carried out. This avoids repetitive tests of whether the
argument is of scalar or composite type. Also, be a bit more paranoid about
composite arguments in some places where we previously weren't checking.
probably got there via blind copy-and-paste from one of the legitimate
callers, so rearrange and comment that code a bit to make it clearer that
this isn't a necessary prerequisite to hash_create. Per observation
from Robert Haas.
of AND/OR clause branches that predtest.c would attempt to deal with. As
noted in bug #4721, that change disabled proof attempts for problem sizes
that people actually expect it to handle. The original complaint
it was trying to solve was O(N^2) behavior for long IN-lists, so let's try
applying the limit to just ScalarArrayOpExprs rather than everything.
Another case of "foolish consistency" I fear.
Back-patch to 8.2, same as the previous patch was.
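
Roughly, the distinction is this (schema invented; the comments stand in for
a few hundred more list entries):

    CREATE TABLE t  (id int);
    CREATE TABLE t1 (CHECK (id < 1000)) INHERITS (t);
    SET constraint_exclusion = on;
    -- a long IN list arrives as a single ScalarArrayOpExpr, so the limit
    -- applies and no proof is attempted once the array is too big:
    SELECT * FROM t WHERE id IN (2000, 2001 /* ... hundreds more ... */);
    -- but an explicit OR tree of comparable size is still examined:
    SELECT * FROM t WHERE id = 2000 OR id = 2001 /* ... */;
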
predicate_refuted_by: if either top-level input is a single-element list,
reduce it to its lone member before proceeding. This avoids
a useless level of AND-recursion within the recursive proof routines.
It's worth doing because, for example, if the clause is a 100-element
list and the predicate is a 1-element list then we'd otherwise strip
the predicate's list structure 100 times as we iterate through the clause.
It's only needed at top level because there won't be any trivial ANDs below
that --- this situation is an artifact of the decision to represent even
single-item conditions as Lists in the "implicit AND" format, and that format
is only used at the top level of any predicate or restriction condition.
that are set up for execution with ExecPrepareExpr rather than going through
the full planner process. By introducing an explicit notion of "expression
planning", this patch also lays a bit of groundwork for maybe someday
allowing sub-selects in standalone expressions.
operator. The result depends only on the two input operators and the proof
direction (imply or refute), so it's easy to cache. This provides a very
large savings in cases such as Sergey Konoplev's long NOT-IN-list example,
where predtest spends all its time repeatedly figuring out that the same pair
of operators cannot be used to prove anything. (But of course the O(N^2)
behavior still catches up with you eventually.) I'm not convinced it buys
a whole lot when constraint_exclusion isn't turned on, but it's not a lot
of added code so we might as well cache all the time.
AND, OR, or equivalent clauses: if there are too many (more than 100), just
exit without proving anything. This ensures that we don't spend O(N^2) time
trying (and most likely failing) to prove anything about very long IN lists
and similar cases.
Also, install a couple of CHECK_FOR_INTERRUPTS calls to ensure that a long
proof attempt can be interrupted.
Per gripe from Sergey Konoplev.
Back-patch the whole patch to 8.2 and just the CHECK_FOR_INTERRUPTS addition
to 8.1. (The rest of the patch doesn't apply cleanly, and since 8.1 doesn't
show the complained-of behavior anyway, it doesn't seem necessary to work
hard on it.)
into nodes/nodeFuncs, so as to reduce wanton cross-subsystem #includes inside
the backend. There's probably more that should be done along this line,
but this is a start anyway.
constraint yields TRUE for every row of its table, only that it does not
yield FALSE (a NULL result isn't disallowed). This breaks a couple of
implications that would be true in two-valued logic. I had put in one such
mistake in an 8.2.5 patch: foo IS NULL doesn't refute a strict operator
on foo. But there was another in the original 8.2 release: NOT foo doesn't
refute an expression whose truth would imply the truth of foo.
Per report from Rajesh Kumar Mallah.
To preserve the ability to do constraint exclusion with one partition
holding NULL values, extend relation_excluded_by_constraints() to check
for attnotnull flags, and add col IS NOT NULL expressions to the set of
constraints we hope to refute.
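
For example (names invented), a partitioning setup with one child reserved
for NULL keys still gets excluded properly:

    CREATE TABLE m (k int, v text);
    CREATE TABLE m_null (CHECK (k IS NULL)) INHERITS (m);
    CREATE TABLE m_pos  (k int NOT NULL, CHECK (k > 0)) INHERITS (m);

    SET constraint_exclusion = on;
    -- "k IS NULL" cannot refute "k > 0" in three-valued logic (it only makes
    -- it NULL), but it does refute the "k IS NOT NULL" condition derived
    -- from m_pos's attnotnull flag, so only m_null is scanned:
    SELECT * FROM m WHERE k IS NULL;
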
checking whether an IS NULL/IS NOT NULL clause is implied or refuted by
a strict function. Per example from Dawid Kuroczko.
Backpatch to 8.2 since this is arguably a performance bug.
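
For instance (names invented), this lets a strict comparison satisfy an
IS NOT NULL partial-index predicate:

    CREATE TABLE orders (id int, shipped date);
    CREATE INDEX orders_shipped_idx ON orders (shipped)
        WHERE shipped IS NOT NULL;
    -- ">=" is strict, so it can be true only for non-null "shipped"; hence
    -- the condition below implies "shipped IS NOT NULL" and the partial
    -- index is applicable:
    SELECT * FROM orders WHERE shipped >= '2007-01-01';
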
more completely. The motivation for having it understand IS NULL at all was
to allow use of "foo IS NULL" as one of the subsets of a partitioning on
"foo", but as reported by Aleksander Kmetec, it wasn't really getting the job
done. Backpatch to 8.2 since this is arguably a performance bug.
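
That is, a layout like this (names invented) should now be excluded as
expected:

    CREATE TABLE events (uid int, msg text);
    CREATE TABLE events_anon (CHECK (uid IS NULL))     INHERITS (events);
    CREATE TABLE events_user (CHECK (uid IS NOT NULL)) INHERITS (events);

    SET constraint_exclusion = on;
    -- "uid IS NULL" refutes the events_user constraint, so only events_anon
    -- is scanned:
    SELECT * FROM events WHERE uid IS NULL;
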
available information about the typmod of an expression; namely, Const,
ArrayRef, ArrayExpr, and EXPR and ARRAY SubLinks. In the ArrayExpr and
SubLink cases it wasn't really the data structure's fault, but exprTypmod()
being lazy. This seems like a good idea in view of the expected increase in
typmod usage from Teodor's work to allow user-defined types to have typmods.
In particular this responds to the concerns we had about eliminating the
special-purpose hack that exprTypmod() used to have for BPCHAR Consts.
We can now tell whether or not such a Const has been cast to a specific
length, and report or display properly if so.
initdb forced due to changes in stored rules.
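
One user-visible effect (example invented): a view column built from a
length-cast constant should now come out with the right typmod:

    CREATE VIEW v AS SELECT 'abc'::char(5) AS c;
    -- column "c" should now be reported as character(5), since the Const
    -- keeps the typmod from the cast instead of losing it.
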
cases. Operator classes now exist within "operator families". While most
families are equivalent to a single class, related classes can be grouped
into one family to represent the fact that they are semantically compatible.
Cross-type operators are now naturally adjunct parts of a family, without
having to wedge them into a particular opclass as we had done originally.
This commit restructures the catalogs and cleans up enough of the fallout so
that everything still works at least as well as before, but most of the work
needed to actually improve the planner's behavior will come later. Also,
there are not yet CREATE/DROP/ALTER OPERATOR FAMILY commands; the only way
to create a new family right now is to allow CREATE OPERATOR CLASS to make
one by default. I owe some more documentation work, too. But that can all
be done in smaller pieces once this infrastructure is in place.
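
For a quick look at the new structure, the classes grouped into each family
can be listed from the catalogs; for instance the integer btree classes are
meant to end up together in one family so that cross-type operators such as
int4 = int8 have a natural home:

    SELECT f.opfname, c.opcname
      FROM pg_opfamily f
      JOIN pg_opclass  c ON c.opcfamily = f.oid
      JOIN pg_am       a ON a.oid = f.opfmethod
     WHERE a.amname = 'btree'
     ORDER BY f.opfname, c.opcname;
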
the SQL spec, viz IS NULL is true if all the row's fields are null, IS NOT
NULL is true if all the row's fields are not null. The former coding got
this right for a limited number of cases with IS NULL (ie, those where it
could disassemble a ROW constructor at parse time), but was entirely wrong
for IS NOT NULL. Per report from Teodor.
I desisted from changing the behavior for arrays, since on closer inspection
it's not clear that there's any support for that in the SQL spec. This
probably needs more consideration.
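
As a concrete example of the spec's semantics:

    -- a row with one null and one non-null field satisfies neither test:
    SELECT ROW(1, NULL) IS NULL     AS all_fields_null,
           ROW(1, NULL) IS NOT NULL AS no_fields_null;
    -- both columns come out false
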
This doesn't matter too much for ordinary NOTs, since prepqual.c does
its best to get rid of those, but it helps with IS NOT TRUE clauses
which the rule rewriter likes to insert. Per example from Martin Lesser.
they were broken-out AND or OR lists. The least grotty way to do this
seemed to be to set up a general mechanism for handling nodes as though
they were ANDs or ORs. There's no other immediate use for it, but perhaps
we might want to use the mechanism someday for things like BETWEEN
SYMMETRIC.
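
The sort of case this enables (names invented): an IN list in a child
table's constraint can now take part in constraint exclusion:

    CREATE TABLE traffic    (region text, hits int);
    CREATE TABLE traffic_eu (CHECK (region IN ('de', 'fr'))) INHERITS (traffic);

    SET constraint_exclusion = on;
    -- the IN list is a ScalarArrayOpExpr; treating it as
    -- (region = 'de' OR region = 'fr') lets "region = 'us'" refute it,
    -- so traffic_eu is excluded:
    SELECT * FROM traffic WHERE region = 'us';
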
predicate_implied_by() to detect redundant filter conditions, but forgot
that predicate_implied_by() assumes its first argument contains only
immutable functions. Add a check to guarantee that. Also, test to see
if filter conditions can be discarded because they are redundant with
the predicate of a partial index.
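
For the partial-index part, the intended effect is roughly this (names
invented):

    CREATE TABLE tasks (id int, closed boolean NOT NULL, due date);
    CREATE INDEX tasks_open_due_idx ON tasks (due) WHERE NOT closed;
    -- if this query is answered with the partial index, the "NOT closed"
    -- condition is implied by the index predicate and can be dropped from
    -- the filter instead of being rechecked for every row:
    SELECT * FROM tasks WHERE NOT closed AND due < '2007-01-01';
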