mentioned in FROM but not elsewhere in the query: such tables should be
joined over anyway. Aside from being more standards-compliant, this allows
removal of some very ugly hacks for COUNT(*) processing. Also, allow
HAVING clause without aggregate functions, since SQL does. Clean up
CREATE RULE statement-list syntax the same way Bruce just fixed the
main stmtmulti production.
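For illustration (table and column names here are hypothetical), both
cases now behave per the standard:
SELECT t1.x FROM t1, t2;
    -- t2 is joined over even though it is referenced nowhere else
SELECT y FROM t1 GROUP BY y HAVING y <> 'foo';
    -- HAVING without any aggregate function is accepted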
CAUTION: addition of a field to RangeTblEntry nodes breaks stored rules;
you will have to initdb if you have any rules.
Implements the CREATE CONSTRAINT TRIGGER and SET CONSTRAINTS commands.
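For example (all names hypothetical; the trigger procedure must already
exist), the new commands look like:
CREATE CONSTRAINT TRIGGER mytrig AFTER INSERT ON mytab
    DEFERRABLE INITIALLY DEFERRED
    FOR EACH ROW EXECUTE PROCEDURE mytrigproc();
SET CONSTRAINTS ALL IMMEDIATE;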
TODO:
Generic builtin trigger procedures
Automatic execution of appropriate CREATE CONSTRAINT... at CREATE TABLE
Support of new trigger type in pg_dump
Swapping of huge # of events to disk
Jan
an empty targetlist *and* fails to return any tuples, as will happen
for example with 'SELECT COUNT(1) FROM table WHERE ...' if the where-
clause selects no tuples. It's so nice to make a fix by diking out code,
instead of adding more...
with no input rows, per pghackers discussions around 7/22/99. Clean up
a bunch of ugly coding while at it; remove redundant re-lookup of
aggregate info at start of each new GROUP. Arrange to pfree intermediate
values when they are pass-by-ref types, so that aggregates on pass-by-ref
types no longer eat memory. This takes care of a couple of TODO items...
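For instance (names hypothetical), a query whose WHERE clause selects
no tuples:
SELECT count(*), sum(val) FROM t WHERE val < 0;
now yields 0 for count and NULL for sum, rather than an error.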
Frankpitt, plus some improvements from yours truly. The simplifier depends
on the proiscachable field of pg_proc to tell it whether a function is
safe to pre-evaluate --- things like nextval() are not, for example.
Update pg_proc.h to contain reasonable cacheability information; as of
6.5.* hardly any functions were marked cacheable. I may have erred too
far in the other direction; see recent mail to pghackers for more info.
This update does not force an initdb, exactly, but you won't see much
benefit from the simplifier until you do one.
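A sketch of what the simplifier now does (names hypothetical):
SELECT * FROM t WHERE x < 2 + 2;
    -- '2 + 2' is folded to '4' once at plan time
SELECT * FROM t WHERE x = nextval('myseq');
    -- nextval() is not cacheable, so it is still evaluated at run time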
* Buffer refcount cleanup (per my "progress report" to pghackers, 9/22).
* Add links to backend PROC structs to sinval's array of per-backend info,
and use these links for routines that need to check the state of all
backends (rather than the slow, complicated search of the ShmemIndex
hashtable that was used before). Add databaseOID to PROC structs.
* Use this to implement an interlock that prevents DESTROY DATABASE of
a database containing running backends. (It's a little tricky to prevent
a concurrently-starting backend from getting in there, since the new
backend is not able to lock anything at the time it tries to look up
its database in pg_database. My solution is to recheck that the DB is
OK at the end of InitPostgres. It may not be a 100% solution, but it's
a lot better than no interlock at all...)
* In ALTER TABLE RENAME, flush buffers for the relation before doing the
rename of the physical files, to ensure we don't get failures later from
mdblindwrt().
* Update TRUNCATE patch so that it actually compiles against current
sources :-(.
You should do "make clean all" after pulling these changes.
additional argument specifying the kind of lock to acquire/release (or
'NoLock' to do no lock processing). Ensure that all relations are locked
with some appropriate lock level before being examined --- this ensures
that relevant shared-inval messages have been processed and should prevent
problems caused by concurrent VACUUM. Fix several bugs having to do with
mismatched increment/decrement of relation ref count and mismatched
heap_open/close (which amounts to the same thing). A bogus ref count on
a relation doesn't matter much *unless* a SI Inval message happens to
arrive at the wrong time, which is probably why we got away with this
sloppiness for so long. Repair missing grab of AccessExclusiveLock in
DROP TABLE, ALTER/RENAME TABLE, etc, as noted by Hiroshi.
Recommend 'make clean all' after pulling this update; I modified the
Relation struct layout slightly.
Will post further discussion to pghackers list shortly.
documented interpretation of the lefthand and oper fields. Fix a number of
obscure problems while at it --- for example, the old code failed if the parser
decided to insert a type-coercion function just below the operator of a
SubLink.
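An example of the coercion case (names hypothetical): in
SELECT * FROM t WHERE int2col = (SELECT max(int4col) FROM u);
the parser inserts an int2-to-int4 coercion just below the '=' operator
of the SubLink, which the old code could not cope with.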
CAUTION: this will break stored rules that contain subplans. You may
need to initdb.
and fix_opids processing to a single recursive pass over the plan tree
executed at the very tail end of planning, rather than haphazardly here
and there at different places. Now that tlist Vars do not get modified
until the very end, it's possible to get rid of the klugy var_equal and
match_varid partial-matching routines, and just use plain equal()
throughout the optimizer. This is a step towards allowing merge and
hash joins to be done on expressions instead of only Vars ...
sort order down into planner, instead of handling it only at the very top
level of the planner. This fixes many things. An explicit sort is now
avoided if there is a cheaper alternative (typically an indexscan) not
only for ORDER BY, but also for the internal sort of GROUP BY. It works
even when there is no other reason (such as a WHERE condition) to consider
the indexscan. It works for indexes on functions. It works for indexes
on functions, backwards. It's just so cool...
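A hypothetical example of the 'functions, backwards' case:
CREATE INDEX t_lower_name ON t (lower(name));
SELECT * FROM t ORDER BY lower(name) DESC;
    -- now done with a backward scan of t_lower_name, no explicit sort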
CAUTION: I have changed the representation of SortClause nodes, therefore
THIS UPDATE BREAKS STORED RULES. You will need to initdb.
contains much code that looks like it will handle indexquals with the index
key on either side of the operator, in fact indexquals must have the index
key on the left because of limitations of the ScanKey machinery. Perhaps
someone will be motivated to fix that someday...
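For instance (names hypothetical), a qual written with the index key on
the right, as in
SELECT * FROM t WHERE 4 < t.i;
can be used as an indexqual on t.i only after being commuted to 't.i > 4'.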
> >
> > was implemented by Jan Wieck.
> > His work is for ascending order cases.
> >
> > Here is a patch to prevent sorting also in descending
> > order cases.
> > Because I had already changed _bt_first() to position
> > backward correctly before v6.5, this patch would work.
> >
Hiroshi Inoue
Inoue@tpf.co.jp
of the SELECT part of the statement is just like a plain SELECT. All
INSERT-specific processing happens after the SELECT parsing is done.
This eliminates many problems, e.g. INSERT ... SELECT ... GROUP BY using
the wrong column labels. Ensure that DEFAULT clauses are coerced to
the target column type, whether or not the stored clause produces the right
type. Substantial cleanup of parser's array support.
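For example (names hypothetical), a statement of this shape used to pick
up the wrong column labels:
INSERT INTO town_counts (town, n)
    SELECT town, count(*) FROM people GROUP BY town;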
pointer to palloc'd but uninitialized memory. This is not cool; anyone looking
at the returned 'tuple' would at best coredump and at worst behave in a
bizarre and irreproducible way. Fix it to return a predictable value,
namely a correctly-set-up palloc'd tuple containing zero attributes.
I believe this fix is both safe and critical.
this one could be useful for people experiencing out-of-memory crashes while
executing queries which retrieve or use a very large number of tuples.
The problem happens when storage is allocated for functions results used in
a large query, for example:
select upper(name) from big_table;
select big_table.array[1] from big_table;
select count(upper(name)) from big_table;
This patch is a dirty hack that fixes the out-of-memory problem for the most
common cases, like the above ones. It is not the final solution for the
problem but it can work for some people, so I'm posting it.
The patch should be safe because all changes are under #ifdef. Furthermore
the feature can be enabled or disabled at runtime by the `free_tuple_memory'
option in the pg_options file. The option is disabled by default and must
be explicitly enabled at runtime to have any effect.
To enable the patch add the following line to Makefile.custom:
CUSTOM_COPT += -DFREE_TUPLE_MEMORY
To enable the option at runtime add the following line to pg_options:
free_tuple_memory=1
Massimo
after ExecEndNode. It must be done! Otherwise we'll run out of free
tuple slots very soon, even though slots are freed by ExecEndNode
and ready for reuse.
We didn't see this problem before because of
int nSlots = ExecCountSlotsNode(plan);
TupleTable tupleTable = ExecCreateTupleTable(nSlots + 10);
/* why add ten? - jolly */
code in InitPlan - i.e. extra 10 slots. A simple select uses
3 slots, so it was possible to re-use an evaluation plan
3 additional times without getting
elog(NOTICE, "Plan requires more slots than are available");
elog(ERROR, "send mail to your local executor guru to fix this");
The changes are obvious and there shouldn't be problems with them.
Still, I added Assert(epqstate->es_tupleTable->next == 0)
before EvalPlanQual():ExecInitNode, so we'll notice if
something is still wrong. Would it be better to change the Assert
to elog(ERROR)?
fixed-size hashtable. This should prevent 'hashtable out of memory' errors,
unless you really do run out of memory. Note: target size for hashtable
is now taken from -S postmaster switch, not -B, since it is local memory
in the backend rather than shared memory.
lists are now plain old garden-variety Lists, allocated with palloc,
rather than specialized expansible-array data allocated with malloc.
This substantially simplifies their handling and eliminates several
sources of memory leakage.
Several basic types of erroneous queries (syntax error, attempt to
insert a duplicate key into a unique index) now demonstrably leak
zero bytes per query.
about certain to fail anytime it decided the relation to be hashed was
too big to fit in memory --- the code for 'batching' a series of hashjoins
had multiple errors. I've fixed the easier problems. A remaining big
problem is that you can get 'hashtable out of memory' if the code's
guesstimate about how much overflow space it will need turns out wrong.
That will require much more extensive revisions to fix, so I'm committing
these fixes now before I start on that problem.
expression context (ie, not at the top level of a WHERE clause). Examples
like this one work now:
SELECT name, value FROM t1 as touter WHERE
(value/(SELECT AVG(value) FROM t1 WHERE name = touter.name)) > 0.75;
indexes.
1. An index scan using multiple indexids never scans backward
   with respect to the order of the indexids.
2. A cursor using an index scan is not usable after moving
   past the end.
This patch fixes the above bugs.
Moreover, the change to _bt_first() would be useful for extending
Jan Wieck's ORDER BY patch to all descending order cases.
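A sketch of the second bug's case (names hypothetical):
BEGIN;
DECLARE c CURSOR FOR SELECT * FROM t ORDER BY i;
FETCH ALL IN c;          -- move past the end
FETCH BACKWARD 1 IN c;   -- formerly unusable after running off the end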
Hiroshi Inoue
hashjoin's hashFunc() so that it does the right thing with pass-by-value
data types (the old code would always return 0 for int2 or char values,
which would work but would slow things down a lot). Extend opr_sanity
regress test to catch more kinds of errors.
change functionality, but makes the code more ANSI C'ish.
My AIX xlc compiler barfs on all of these. Could someone please
review this and apply it to current?
<<port.patch>>
Thanks
Andreas