teaching the latter to accept either RestrictInfo nodes or bare
clause expressions; and cache the selectivity result in the RestrictInfo
node when possible. This extends the caching behavior of approx_selectivity
to many more contexts, and should reduce duplicate selectivity
calculations.
first time generate an OR indexscan for a two-column index when the WHERE
condition is like 'col1 = foo AND (col2 = bar OR col2 = baz)' --- before,
the OR had to be on the first column of the index or we'd not notice the
possibility of using it. Some progress towards extracting OR indexscans
from subclauses of an OR that references multiple relations, too, although
this code is #ifdef'd out because it needs more work.
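For illustration (table, index, and column names are hypothetical), this is the shape of query that can now produce an OR indexscan on a two-column index:

-- hypothetical two-column index
CREATE INDEX tab_col1_col2_idx ON tab (col1, col2);

-- previously the OR had to be on col1 for an OR indexscan to be considered
SELECT *
FROM tab
WHERE col1 = 'foo' AND (col2 = 'bar' OR col2 = 'baz');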
fields: now they are valid whenever the clause is a binary opclause,
not only when it is a potential join clause (there is a new boolean
field canjoin to signal the latter condition). This lets us avoid
recomputing the relid sets over and over while examining indexes.
Still more work to do to make this as useful as it could be, because
there are places that could use the info but don't have access to the
RestrictInfo node.
just look for common clauses that can be pulled out of ORs. Per recent
discussion, extracting common clauses seems to be the only really useful
effect of normalization, and if we do it explicitly then we can avoid
cluttering the qual with partially-redundant duplicated expressions, which
was an unpleasant side-effect of the old approach.
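To illustrate with hypothetical column names, a WHERE clause such as

-- hypothetical columns a, b, c
WHERE (a = 1 AND b = 2) OR (a = 1 AND c = 3)

is now treated as though it had been written

WHERE a = 1 AND (b = 2 OR c = 3)

with the common clause a = 1 extracted and no duplicated sub-expressions left behind.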
about whether it is applied before or after eval_const_expressions().
I believe there were some corner cases where the system would fail to
recognize that a partial index is applicable because of the previous
inconsistency. Store normal rather than 'implicit AND' representations
of constraints and index predicates in the catalogs.
initdb forced due to representation change of constraints/predicates.
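A sketch of the kind of case involved (table, index, and column names are hypothetical): the planner must prove that the stored index predicate is implied by the query's qual, and inconsistent canonicalization could defeat that proof.

-- hypothetical partial index whose predicate is itself an AND of clauses
CREATE INDEX tab_flagged_idx ON tab (id) WHERE flag AND id > 0;

-- the planner should recognize that this qual implies the index predicate
SELECT * FROM tab WHERE flag AND id > 0 AND name = 'foo';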
----------------------------------------------------------------------
/*
 * relation_byte_size
 *    Estimate the storage space in bytes for a given number of tuples
 *    of a given width (size in bytes).
 */
static double
relation_byte_size(double tuples, int width)
{
    return tuples * (MAXALIGN(width) + MAXALIGN(sizeof(HeapTupleData)));
}
----------------------------------------------------------------------
Shouldn't this be HeapTupleHeaderData and not HeapTupleData?

(Of course, from a costing perspective these shouldn't be very different but ...)
Sailesh Krishnamurthy
where a joinclause is redundant with a restriction clause. Original coding
believed this was impossible and didn't need to be checked for, but that
was a thinko ...
a join in its subselect. In this situation we *must* build a bushy
plan because there are no valid left-sided or right-sided join trees.
Accordingly, the hoary sanity check needs an update. Per report from
Alessandro Depase.
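A hypothetical query of roughly this shape: after the sub-selects are flattened, the only join clause linking the two groups references a join on each side, so the planner has to build {a,b} and {c,d} separately and then join them, i.e. a bushy plan.

-- hypothetical tables a, b, c, d
SELECT *
FROM (SELECT a.x + b.x AS v FROM a, b WHERE a.id = b.id) s1,
     (SELECT c.x + d.x AS v FROM c, d WHERE c.id = d.id) s2
WHERE s1.v = s2.v;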
planning to modify them itself. Otherwise we end up with shared RTE
substructure, which breaks inheritance_planner because the rte->inh
flag needs to be independent in each copied subquery. Per bug report
from Chris Piker.
and hash bucket-size estimation. Issue has been there awhile but is more
critical in 7.4 because it affects varchar columns. Per report from
Greg Stark.
Vars created to fill subplan args lists. This is an ancient error, going
back at least to 7.0, but is more easily triggered in 7.4 than before
because we no longer compare varlevelsup when deciding whether a Param
slot can be re-used. Fixes bug reported by Klint Gore.
the hashclauses field of the parent HashJoin. This avoids problems with
duplicated links to SubPlans in hash clauses, as per report from
Andrew Holm-Hansen.
pghackers proposal of 8-Nov. All the existing cross-type comparison
operators (int2/int4/int8 and float4/float8) have appropriate support.
The original proposal of storing the right-hand-side datatype as part of
the primary key for pg_amop and pg_amproc got modified a bit in the event;
it is easier to store zero as the 'default' case and only store a nonzero
when the operator is actually cross-type. Along the way, remove the
long-since-defunct bigbox_ops operator class.
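A sketch of the user-visible effect of the cross-type support described above (table name hypothetical): a cross-type comparison such as int8 = int4 can now be matched to an index directly.

-- hypothetical table with an int8 primary key
CREATE TABLE big (id int8 PRIMARY KEY);

-- the int8 = int4 operator now has index support, so this can use the
-- primary-key index even though 42 is an int4 constant
SELECT * FROM big WHERE id = 42;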
Remove the 'strategy map' code, which was a large amount of mechanism
that no longer had any use except reverse-mapping from procedure OID to
strategy number. Passing the strategy number to the index AM in the
first place is simpler and faster.
This is a preliminary step in planned support for cross-datatype index
operations. I'm committing it now since the ScanKeyEntryInitialize()
API change touches quite a lot of files, and I want to commit those
changes before the tree drifts under me.
regression=# select 1 from tenk1 ta cross join tenk1 tb for update;
ERROR: no relation entry for relid 3
7.3 said "SELECT FOR UPDATE cannot be applied to a join", which was better
but still wrong, considering that 7.2 took the query just fine. Fix by
making transformForUpdate() ignore JOIN and other special RTE types,
rather than trying to mark them FOR UPDATE. The actual error message now
only appears if you explicitly name the join in FOR UPDATE.
sequence every time it's called is bogus --- it interferes with user
control over the seed, and actually decreases randomness overall
(because a seed based on time(NULL) is pretty predictable). If you really
want a reproducible result from geqo, do 'set seed = 0' before planning
a query.
be anything yielding an array of the proper kind, not only sub-ARRAY[]
constructs; do subscript checking at runtime not parse time. Also,
adjust array_cat to make array || array comply with the SQL99 spec.
Joe Conway
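Illustrative examples of the ARRAY[] and || behavior described above (the array column is hypothetical):

-- array || array concatenation per SQL99
SELECT ARRAY[1,2] || ARRAY[3,4];                -- {1,2,3,4}

-- sub-elements of a multidimensional ARRAY[] need not be ARRAY[] constructs;
-- subscript compatibility is checked at runtime (intarrcol is hypothetical)
SELECT ARRAY[ARRAY[1,2], intarrcol] FROM tab;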
datatype by array_eq and array_cmp; use this to solve problems with memory
leaks in array indexing support. The parser's equality_oper and ordering_oper
routines also use the cache. Change the operator search algorithms to look
for appropriate btree or hash index opclasses, instead of assuming operators
named '<' or '=' have the right semantics. (ORDER BY ASC/DESC now also look
at opclasses, instead of assuming '<' and '>' are the right things.) Add
several more index opclasses so that there is no regression in functionality
for base datatypes. initdb forced due to catalog additions.
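A brief illustration (the table and its user-defined column type are hypothetical):

-- array_eq compares elements using the cached default btree equality
-- operator for the element type
SELECT ARRAY[1,2,3] = ARRAY[1,2,3];    -- true

-- ORDER BY resolves ordering through the type's default btree opclass, so a
-- type whose ordering operator is not literally named '<' still sorts correctly
SELECT val FROM tab ORDER BY val DESC;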
target columns in INSERT and UPDATE targetlists. Don't rely on resname
to be accurate in ruleutils, either. This fixes bug reported by
Donald Fraser, in which renaming a column referenced in a rule did not
work very well.
yet, though). Avoid using nth() to fetch tlist entries; provide a
common routine get_tle_by_resno() to search a tlist for a particular
resno. This replaces a couple uses of nth() and a dozen hand-coded
search loops. Also, replace a few uses of nth(length-1, list) with
llast().
subplan it starts with, as they may be needed at upper join levels.
See comments added to code for the non-obvious reason why. Per bug report
from Robert Creager.
query node, since that won't work unless the planner is upgraded.
Someday we should try to support at least some cases of this, but for
now just plug the hole in the dike. Per discussion with Dmitry Tkach.
instead of the former kluge whereby gram.y emitted already-transformed
expressions. This is needed so that Params appearing in these clauses
actually work correctly. I suppose some might claim that the side effect
of 'SELECT ... LIMIT 2+2' working is a new feature, but I say this is
a bug fix.
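A sketch of what now works (table name hypothetical): with LIMIT/OFFSET transformed as ordinary expressions, a Param supplied at execution time behaves correctly.

-- hypothetical table; LIMIT and OFFSET take parameters like any expression
PREPARE page(int, int) AS
  SELECT * FROM tab ORDER BY id LIMIT $1 OFFSET $2;

EXECUTE page(20, 40);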
ANYELEMENT. The effect is to postpone typechecking of the function
body until runtime. Documentation is still lacking.
Original patch by Joe Conway, modified to postpone type checking
by Tom Lane.
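A minimal illustration of a polymorphic SQL function (the function name is hypothetical); its body is typechecked only when it is called with concrete argument types:

-- hypothetical polymorphic function
CREATE FUNCTION make_array(anyelement, anyelement) RETURNS anyarray
  AS 'SELECT ARRAY[$1, $2]' LANGUAGE sql;

SELECT make_array(1, 2);            -- {1,2}
SELECT make_array('a'::text, 'b');  -- {a,b}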
node emits only those vars that are actually needed above it in the
plan tree. (There were comments in the code suggesting that this was
done at some point in the dim past, but for a long time we have just
made join nodes emit everything that either input emitted.) Aside from
being marginally more efficient, this fixes the problem noted by Peter
Eisentraut where a join above an IN-implemented-as-join might fail,
because the subplan targetlist constructed in the latter case didn't
meet the expectation of including everything.
Along the way, fix some places that were O(N^2) in the targetlist
length. This is not all the trouble spots for wide queries by any
means, but it's a step forward.
'scalar op ALL (array)', where the operator is applied between the
lefthand scalar and each element of the array. The operator must
yield boolean; the result of the construct is the OR or AND of the
per-element results, respectively.
Original coding by Joe Conway, after an idea of Peter's. Rewritten
by Tom to keep the implementation strictly separate from subqueries.
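For illustration:

SELECT 5 = ANY (ARRAY[1, 3, 5, 7]);   -- true: OR of the per-element results
SELECT 5 < ALL (ARRAY[6, 7, 8]);      -- true: AND of the per-element results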