to give wrong results: it should be looking at inJoinSet not inFromCl.
Also, make 'modified' flag be local to ApplyRetrieveRule: we should
append a rule's quals to the query iff that particular rule applies,
not if we have fired any previously-considered rule for the query!
(SELECT FROM table*). Cause was reference to 'eref' field of an RTE,
which is null in an RTE loaded from a stored rule parsetree. There
wasn't any good reason to be touching the refname anyway...
table for an average of NTUP_PER_BUCKET tuples/bucket, but cost_hashjoin
was assuming a target load of one tuple/bucket. This was causing a
noticeable underestimate of hashjoin costs.
(LIKE and regexp matches). These are not yet referenced in pg_operator,
so by default the system will continue to use eqsel/neqsel.
Also, tweak convert_to_scalar() logic so that common prefixes of strings
are stripped off, allowing better accuracy when all strings in a table
share a common prefix.
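A rough illustration of the prefix case (table and column names invented):
SELECT * FROM files WHERE path LIKE '/usr/local/bin/%';
If every value in files.path begins with '/usr/local/', stripping that shared prefix lets the characters that actually differ drive the convert_to_scalar() estimate.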
subsequent elog()s in the same COPY operation to display the wrong
line number. The fix is to clear lineno only when the elog level is such
that we will not return to the caller.
all platforms, not just SCO. The operation is undefined for Unix-domain
sockets anyway. It seems SCO is not the only platform that complains
instead of treating the call as a no-op.
contained a sub-SELECT nested within an AND/OR tree that cnfify()
thought it should rearrange. The same physical sub-SELECT node could
end up linked into multiple places in the resulting expression tree.
This is harmless for most node types, but not for SubLink.
Repair bug by making physical copies of subexpressions that get
logically duplicated by cnfify(). Also, tweak the heuristic that
decides whether it's a good idea to do cnfify() --- we don't really
want that to happen when it would cause multiple copies of a subselect
to be generated, I think.
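A hedged sketch of the problem shape (table and column names invented):
SELECT * FROM t1
WHERE (t1.a = 1 AND t1.b = 2)
   OR t1.c IN (SELECT x FROM t2);
cnfify() would distribute the OR, producing (a = 1 OR c IN (...)) AND (b = 2 OR c IN (...)), so the single SubLink node for the sub-SELECT got linked into both arms; with this change the duplicated subexpression is physically copied instead.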
whether to do fsync or not, and if so (which should be seldom) just
do the fsync immediately. This way we need not build data structures
in md.c/fd.c for blind writes.
logged queries to 1024, truncating longer queries. That is about half of
the size I need (I have a union that is 2K long). Can someone consider
bumping it to 4K or so? Patch attached...
Regards,
Ed Loehr
as a shared dirtybit for each shared buffer. The shared dirtybit still
controls writing the buffer, but the local bit controls whether we need
to fsync the buffer's file. This arrangement fixes a bug that allowed
some required fsyncs to be missed, and should improve performance as well.
For more info see my post of same date on pghackers.
than not knowing what they are at all. Perhaps they should have their own
type category? Hard to say. In the meantime, doing it this way allows
SELECT 'unknown' || 'unknown' to continue being resolved as textcat,
instead of spitting out an ambiguous-operator error.
parse node types. This allows these statements to be placed in a plpgsql
function. Also, see to it that statement types not handled by the copy
logic will draw an appropriate elog(ERROR), instead of leaving a null
pointer that will cause coredump later on. More utility statements could
be added if anyone felt like turning the crank.
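An illustrative sketch only (exactly which utility statements are covered is whatever the copy logic now handles; NOTIFY here is just a stand-in):
CREATE FUNCTION flag_change() RETURNS int4 AS '
BEGIN
    NOTIFY table_changed;
    RETURN 1;
END;' LANGUAGE 'plpgsql';
The point is that the utility statement's parse node can now be copied when plpgsql stores the query, instead of leaving a null pointer behind.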
Add a random number generator and seed setter (random(), SET SEED)
Fix up the interval*float8 math to carry partial months
into the time field.
Add float8*interval so we have symmetry in the available math.
Fix the parser and define.c to accept SQL92 types as field arguments.
Fix the parser to accept SQL92 types for CREATE TYPE, etc. This is
necessary to allow...
Bit/varbit support in contrib/bit cleaned up to compile and load
cleanly. Still needs some work before final release.
Implement the "SOME" keyword as a synonym for "ANY" per SQL92.
Implement ascii(text), ichar(int4), repeat(text,int4) to help
support the ODBC driver.
Enable the TRUNCATE() function mapping in the ODBC driver.
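Some hedged examples of the new syntax listed above (table names invented, output not verified):
SET SEED TO 0.5;
SELECT random();
SELECT '1 month'::interval * 1.5::float8;  -- partial month carried into the time field
SELECT 1.5::float8 * '1 month'::interval;  -- the symmetric operator
SELECT * FROM t1 WHERE t1.a = SOME (SELECT x FROM t2);  -- SOME as a synonym for ANY
SELECT ascii('A'), ichar(65), repeat('ab', 3);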
Ensure that outer tuple link needed for inner indexscan qual evaluation
gets set in the EvalPlanQual case. This stops coredump, but we still
have resource leaks due to failure to clean up EvalPlanQual properly...
xact abort state in pg_exec_query_dest, we should continue scanning the
querytree list, on the off chance that one of the later queries in the
string is COMMIT or ROLLBACK.
would crash, due to premature invocation of SetQuerySnapshot(). Clean
up problems with handling of multiple queries by splitting
pg_parse_and_plan into two routines. The old code would not, for
example, do the right thing with END; SELECT... submitted in one query
string when it had been in transaction abort state, because it'd decide
to skip planning the SELECT before it had executed the END. New
arrangement is simpler and doesn't force caller to plan if only
parse+rewrite is needed.
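For example (a hedged sketch; the table name is invented), a query string like
END; SELECT * FROM mytable;
sent while in transaction abort state now has the END executed first, clearing the abort state, before the SELECT is planned and run.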
be an expression not just a simple Var, so long as only one table is
referenced (so that code isn't really any more difficult than before).
This whole thing is still fundamentally bogus, but at least we can accept
a few more cases than before.
WHERE in a place where it can be part of a nestloop inner indexqual.
As the code stood, it put the same physical sub-Plan node into both
indxqual and indxqualorig of the IndexScan plan node. That confused
later processing in the optimizer (which expected that tracing the
subPlan list would visit each subplan node exactly once), and would
probably have blown up in the executor if the planner hadn't choked first.
Fix by making the 'fixed' indexqual be a complete deep copy of the
original indexqual, rather than trying to share nodes below the topmost
operator node. This had further ramifications though, because we were
making the aforesaid list of sub-Plan nodes during SS_process_sublinks
which is run before construction of the 'fixed' indexqual, meaning that
the copy of the sub-Plan didn't show up in that list. Fix by rearranging
logic so that the sub-Plan list is built by the final set_plan_references
pass, not in SS_process_sublinks. This may sound like a mess, but it's
actually a good deal cleaner now than it was before, because we are no
longer dependent on the assumption that planning will never make a copy
of a sub-Plan node.
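A hedged illustration of the query shape involved (names invented; whether the planner actually chooses a nestloop with an inner indexscan depends on the available indexes and statistics):
SELECT * FROM t1, t2
WHERE t1.id = t2.id
  AND t2.val = (SELECT max(x) FROM t3);
Here the sub-Plan for the sub-SELECT can end up inside the indexqual of the indexscan on t2 when t2 is the inner side of a nestloop join.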
pg_internal.init file in-place, which meant that if another backend
started at about the same time, it might read the incomplete file.
init_irels tries to guard against that, but I have now seen a crash
due to reading bad data from a partly-written file. (This may indicate
a kernel bug on my platform? Not sure.) Anyway, clearly the safest
course is to write the new pg_internal.init file under a unique temporary
filename, and rename it into place only after it's all written.
In the event of an elog() while the mode was set to immediate write,
there was no way for it to be set back to the normal delayed write.
The mechanism was a waste of space and cycles anyway, since the only user
was varsup.c, which could perfectly well call FlushBuffer directly.
Now it does just that, and the notion of a write mode is gone.
single integers, and lists of names, without surrounding them with quotes.
Remove all tokens which are defined as operators from ColId and ColLabel
to avoid precedence confusion. Thanks to Tom Lane for catching this.
to next integer. Previously, if selectivity was small, we could compute
very tiny scan cost on the basis of estimating that only 0.001 tuple
would be fetched, which is silly. This naturally led to some rather
silly plans...
Move CREATE FUNCTION/WITH clause to end of statement to get around
shift/reduce conflicts with type names containing "WITH".
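For reference, a hedged example of the resulting shape (the body and the iscachable attribute are just illustrative), with the WITH clause now trailing the statement:
CREATE FUNCTION add_one(int4) RETURNS int4
    AS 'SELECT $1 + 1' LANGUAGE 'sql' WITH (iscachable);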
Add lots of tokens as allowed ColId's and/or ColLabel's,
so this should be a complete set for the v7.0 release.
keys lists of Constraint nodes. This eliminates a type pun that would
probably have caused trouble someday, and eliminates circular references
in the parsetree that were causing trouble now.
Also, change parser's uses of strcasecmp() to strcmp(). Since scan.l
has downcased any unquoted identifier, it is never correct to check an
identifier with strcasecmp() in the parser. For example,
CREATE TABLE FOO (f1 int, UNIQUE("F1"));
was accepted, which is wrong, and xlateSqlFunc did more than it should:
select datetime();
ERROR: Function 'timestamp()' does not exist
(good)
select "DateTime"();
ERROR: Function 'timestamp()' does not exist
(bad)
Clean up grotty coding in them, too. AFAICS from the CVS logs, these
have been broken since Postgres95, so I'm not going to insist on an
initdb to fix them now...