node now does its own grouping of the input rows, and has no need for a
preceding GROUP node in the plan pipeline. This allows elimination of
the misnamed tuplePerGroup option for GROUP, and actually saves more code
in nodeGroup.c than it costs in nodeAgg.c, as well as being presumably
faster. Restructure the API of query_planner so that we do not commit to
using a sorted or unsorted plan in query_planner; instead grouping_planner
makes the decision. (Right now it isn't any smarter than query_planner
was, but that will change as soon as it has the option to select a hash-
based aggregation step.) Despite all the hackery, no initdb needed since
only in-memory node types changed.
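As an illustration (table and column names hypothetical), a grouped
aggregate such as:
SELECT dept, count(*), avg(salary) FROM emp GROUP BY dept;
is now grouped by the Agg node itself; the plan no longer contains a
separate GROUP node below it (though a Sort node may still feed it).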
types for Table Functions, as previously proposed on HACKERS. Here is a
brief explanation:
1. Creates a new pg_type typtype: 'p' for pseudo type (currently either
'b' for base or 'c' for catalog, i.e. a class).
2. Creates new builtin type of typtype='p' named RECORD. This is the
first of potentially several pseudo types.
3. Modify FROM clause grammar to accept:
SELECT * FROM my_func() AS m(colname1 type1, colname2 type2, ...)
where m is the table alias, colname1, etc. are the column names, and
type1, etc. are the column types.
4. When typtype == 'p' and the function return type is RECORD, a list
of column defs is required, and when typtype != 'p', it is
disallowed.
5. A check was added to ensure that the tupdesc provided via the parser
and the actual return tupdesc match in number and type of
attributes.
When creating a function you can do:
CREATE FUNCTION foo(text) RETURNS setof RECORD ...
When using it you can do:
SELECT * FROM foo(sqlstmt) AS (f1 int, f2 text, f3 timestamp)
or
SELECT * FROM foo(sqlstmt) AS f(f1 int, f2 text, f3 timestamp)
or
SELECT * FROM foo(sqlstmt) f(f1 int, f2 text, f3 timestamp)
Included in the patches are adjustments to the regression test sql and
expected files, and documentation.
p.s.
This potentially solves (or at least improves) the issue of builtin
Table Functions. They can be bootstrapped as returning RECORD, and
we can wrap system views around them with properly specified column
defs. For example:
CREATE VIEW pg_settings AS
SELECT s.name, s.setting
FROM show_all_settings() AS s(name text, setting text);
Then we can also add the UPDATE RULE that I previously posted to
pg_settings, and have pg_settings act like a virtual table, allowing
settings to be queried and set.
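A minimal sketch of such a rule (assuming a set_config(name, value,
is_local) style function; the previously posted version may differ in
detail):
CREATE RULE pg_settings_u AS
ON UPDATE TO pg_settings WHERE new.name = old.name
DO SELECT set_config(old.name, new.setting, 'f');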
Joe Conway
yesterday's proposal to pghackers. Also remove unnecessary parameters
to heap_beginscan, heap_rescan. I modified pg_proc.h to reflect the
new numbers of parameters for the AM interface routines, but did not
force an initdb because nothing actually looks at those fields.
some kibitzing from Tom Lane. Not everything works yet, and there's
no documentation or regression test, but let's commit this so Joe
doesn't need to cope with tracking changes in so many files ...
Note: I didn't force an initdb, figuring that one today was enough.
However, there is a new function in pg_proc.h, and pg_dump won't be
able to dump partial indexes until you add that function.
report on old-style functions invoked by RI triggers. We had a number of
other places that were being sloppy about which memory context FmgrInfo
subsidiary data would be allocated in. Turns out none of them actually
cause a problem in 7.1, but this is for arcane reasons such as the fact
that old-style triggers aren't supported anyway. To avoid getting burnt
later, I've restructured the trigger support so that we don't keep trigger
FmgrInfo structs in relcache memory. Some other related cleanups too:
it's not really necessary to call fmgr_info at all while setting up
the index support info in relcache entries, because those ScanKeyEntry
structs are never used to invoke the functions. This should speed up
relcache initialization a tiny bit.
the same tuple slot that the raw tuple came from, because that slot has
the wrong tuple descriptor. Store it into its own slot with the correct
descriptor, instead. This repairs problems with SPI functions seeing
inappropriate tuple descriptors --- for example, plpgsql code failing to
cope with SELECT FOR UPDATE.
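For example, plpgsql code along these lines (table name hypothetical)
used to see the wrong tuple descriptor at the SELECT ... FOR UPDATE:
CREATE FUNCTION reserve_item(int4) RETURNS int4 AS '
DECLARE
    it record;
BEGIN
    SELECT * INTO it FROM items WHERE id = $1 FOR UPDATE;
    UPDATE items SET reserved = true WHERE id = $1;
    RETURN it.id;
END;
' LANGUAGE 'plpgsql';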
trees (mostly my fault). Repair. Also fix long-standing bug in ExecReplace:
after recomputing a concurrently updated tuple, we must recheck constraints.
Make EvalPlanQual leak memory with somewhat less enthusiasm than before,
although plugging leaks fully will require more changes than I care to risk
in a dot-release.
a separate statement (though it can still be invoked as part of VACUUM, too).
pg_statistic redesigned to be more flexible about what statistics are
stored. ANALYZE now collects a list of several of the most common values,
not just one, plus a histogram (not just the min and max values). Random
sampling is used to make the process reasonably fast even on very large
tables. The number of values and histogram bins collected is now
user-settable via an ALTER TABLE command.
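For example (table and column names hypothetical), the target for a
heavily skewed column can be raised before re-analyzing:
ALTER TABLE orders ALTER COLUMN status SET STATISTICS 50;
ANALYZE orders;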
There is more still to do; the new stats are not being used everywhere
they could be in the planner. But the remaining changes for this project
should be localized, and the behavior is already better than before.
A not-very-related change is that sorting now makes use of btree comparison
routines if it can find one, rather than invoking '<' twice.
allocated by plan nodes are not leaked at end of query. This doesn't
really matter for normal queries, but it sure does for queries invoked
repetitively inside SQL functions. Clean up some other grotty code
associated with tupdescs, and fix a few other memory leaks exposed by
tests with simple SQL functions.
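For instance, a trivial SQL function applied across a large table
(names hypothetical) used to accumulate that per-call leakage:
CREATE FUNCTION double_it(int4) RETURNS int4 AS
    'SELECT $1 * 2' LANGUAGE 'sql';
SELECT double_it(i) FROM big_table;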
joins, and clean things up a good deal at the same time. Append plan node
no longer hacks on rangetable at runtime --- instead, all child tables are
given their own RT entries during planning. Concept of multiple target
tables pushed up into execMain, replacing bug-prone implementation within
nodeAppend. Planner now supports generating Append plans for inheritance
sets either at the top of the plan (the old way) or at the bottom. Expanding
at the bottom is appropriate for tables used as sources, since they may
appear inside an outer join; but we must still expand at the top when the
target of an UPDATE or DELETE is an inheritance set, because we actually need
a different targetlist and junkfilter for each target table in that case.
Fortunately a target table can't be inside an outer join... Bizarre mutual
recursion between union_planner and prepunion.c is gone --- in fact,
union_planner doesn't really have much to do with union queries anymore,
so I renamed it grouping_planner.
ExecutorRun. This allows LIMIT to work in a view. Also, LIMIT in a
cursor declaration will behave in a reasonable fashion, whereas before
it was overridden by the FETCH count.
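For example (names hypothetical):
CREATE VIEW top_scores AS
    SELECT * FROM scores ORDER BY score DESC LIMIT 10;
BEGIN;
DECLARE c CURSOR FOR SELECT * FROM scores LIMIT 5;
FETCH 100 IN c;   -- stops after the LIMIT of 5 rows, not the FETCH count
END;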
SQL92 semantics, including support for ALL option. All three can be used
in subqueries and views. DISTINCT and ORDER BY work now in views, too.
This rewrite fixes many problems with cross-datatype UNIONs and INSERT/SELECT
where the SELECT yields different datatypes than the INSERT needs. I did
that by making UNION subqueries and SELECT in INSERT be treated like
subselects-in-FROM, thereby allowing an extra level of targetlist where the
datatype conversions can be inserted safely.
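For instance (hypothetical tables where i is integer and f is float8),
these now get the needed conversions inserted at the extra targetlist
level:
SELECT i FROM int_tbl UNION SELECT f FROM float_tbl;
INSERT INTO float_tbl SELECT i FROM int_tbl;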
INITDB NEEDED!
(Don't forget that an alias is required.) Views reimplemented as expanding
to subselect-in-FROM. Grouping, aggregates, DISTINCT in views actually
work now (he says optimistically). No UNION support in subselects/views
yet, but I have some ideas about that. Rule-related permissions checking
moved out of rewriter and into executor.
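For example (tables hypothetical; note that the subselect's alias is
required):
SELECT d.name, s.pay
  FROM dept d,
       (SELECT deptno, sum(salary) AS pay FROM emp GROUP BY deptno) AS s
 WHERE s.deptno = d.deptno;
CREATE VIEW dept_pay AS
    SELECT deptno, sum(salary) AS pay FROM emp GROUP BY deptno;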
INITDB REQUIRED!
for example, an SQL function can be used in a functional index. (I make
no promises about speed, but it'll work ;-).) Clean up and simplify
handling of functions returning sets.
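For example (names hypothetical), something like this is now allowed:
CREATE FUNCTION lower_title(text) RETURNS text AS
    'SELECT lower($1)' LANGUAGE 'sql';
CREATE INDEX books_lower_title_idx ON books (lower_title(title));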
right circumstances a hash join executed as a DECLARE CURSOR/FETCH
query would crash the backend. Problem as seen in current sources was
that the hash tables were stored in a context that was a child of
TransactionCommandContext, which got zapped at completion of the FETCH
command --- but cursor cleanup executed at COMMIT expected the tables
to still be valid. I haven't chased down the details as seen in 7.0.*
but I'm sure it's the same general problem.
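A sequence of this general shape (tables hypothetical) could trigger it
whenever the planner chose a hash join for the cursor's query:
BEGIN;
DECLARE c CURSOR FOR SELECT * FROM a, b WHERE a.id = b.id;
FETCH 10 IN c;   -- hash tables freed with TransactionCommandContext here
COMMIT;          -- cursor cleanup then touched the freed tables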
right thing with variable-free clauses that contain noncachable functions,
such as 'WHERE random() < 0.5' --- these are evaluated once per
potential output tuple. Expressions that contain only Params are
now candidates to be indexscan quals --- for example, 'var = ($1 + 1)'
can now be indexed. Cope with RelabelType nodes atop potential indexscan
variables --- this oversight prevents 7.0.* from recognizing some
potentially indexscanable situations.
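Illustrations of the newly handled cases (table, index, and function
names hypothetical):
SELECT * FROM lineitem WHERE random() < 0.5;
    -- the random() clause is evaluated once per potential output tuple
CREATE FUNCTION next_item_name(int4) RETURNS text AS
    'SELECT name FROM items WHERE id = ($1 + 1)' LANGUAGE 'sql';
    -- the Param-only expression ($1 + 1) can now drive an indexscan on id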
thing when there are multiple result relations. Formerly, during
something like 'UPDATE foo*', foo's constraints and *only* foo's
constraints would be applied to all foo's children. Wrong-o ...
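For example (classic inheritance setup, names hypothetical):
CREATE TABLE cities (name text, population float8);
CREATE TABLE capitals (state char(2) CHECK (state <> '')) INHERITS (cities);
UPDATE cities* SET population = population + 1;
    -- capitals rows are now checked against capitals' own constraints too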
pass-by-ref data types --- eg, an index on lower(textfield) --- no longer
leak memory during index creation or update. Clean up a lot of redundant
code ... did you know that copy, vacuum, truncate, reindex, extend index,
and bootstrap each basically duplicated the main executor's logic for
extracting information about an index and preparing index entries?
Functional indexes should be a little faster now too, due to removal
of repeated function lookups.
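For example, an index such as this (names hypothetical) no longer leaks
during its creation or during later table updates:
CREATE INDEX users_lower_email_idx ON users (lower(email));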
CREATE INDEX 'opt_type' clause is deimplemented by these changes,
but I haven't removed it from the parser yet (need to merge with
Thomas' latest change set first).
memory contexts. Currently, only leaks in expressions executed as
quals or projections are handled. Clean up some old dead cruft in
executor while at it --- unused fields in state nodes, that sort of thing.
materialized tupleset is small enough) instead of a temporary relation.
This was something I was thinking of doing anyway for performance, and Jan
says he needs it for TOAST because he doesn't want to cope with toasting
noname relations. With this change, the 'noname table' support in heap.c
is dead code, and I have accordingly removed it. Also clean up 'noname'
plan handling in planner --- nonames are either sort or materialize plans,
and it seems less confusing to handle them separately under those names.
SELECT DISTINCT ON (expr [, expr ...]) targetlist ...
and there is a check to make sure that the user didn't specify an ORDER BY
that's incompatible with the DISTINCT operation.
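For example (table hypothetical), to get the most recent report for
each location:
SELECT DISTINCT ON (location) location, time, report
FROM weather_reports
ORDER BY location, time DESC;
    -- an ORDER BY not starting with the DISTINCT ON expressions is rejected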
Reimplement nodeUnique and nodeGroup to use the proper datatype-specific
equality function for each column being compared --- they used to do
bitwise comparisons or convert the data to text strings and strcmp().
(To add insult to injury, they'd look up the conversion functions once
for each tuple...) Parse/plan representation of DISTINCT is now a list
of SortClause nodes.
initdb forced by querytree change...
a generalized module 'tuplesort.c' that can sort either HeapTuples or
IndexTuples, and is not tied to execution of a Sort node. Clean up
memory leakages in sorting, and replace nbtsort.c's private implementation
of mergesorting with calls to tuplesort.c.
with no input rows, per pghackers discussions around 7/22/99. Clean up
a bunch of ugly coding while at it; remove redundant re-lookup of
aggregate info at start of each new GROUP. Arrange to pfree intermediate
values when they are pass-by-ref types, so that aggregates on pass-by-ref
types no longer eat memory. This takes care of a couple of TODO items...
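For example (table hypothetical), aggregating over zero input rows now
yields the per-spec results:
SELECT count(*), sum(amount), max(amount) FROM orders WHERE amount < 0;
    -- with no matching rows: count is 0, sum and max are NULL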
* Buffer refcount cleanup (per my "progress report" to pghackers, 9/22).
* Add links to backend PROC structs to sinval's array of per-backend info,
and use these links for routines that need to check the state of all
backends (rather than the slow, complicated search of the ShmemIndex
hashtable that was used before). Add databaseOID to PROC structs.
* Use this to implement an interlock that prevents DESTROY DATABASE of
a database containing running backends. (It's a little tricky to prevent
a concurrently-starting backend from getting in there, since the new
backend is not able to lock anything at the time it tries to look up
its database in pg_database. My solution is to recheck that the DB is
OK at the end of InitPostgres. It may not be a 100% solution, but it's
a lot better than no interlock at all...)
* In ALTER TABLE RENAME, flush buffers for the relation before doing the
rename of the physical files, to ensure we don't get failures later from
mdblindwrt().
* Update TRUNCATE patch so that it actually compiles against current
sources :-(.
You should do "make clean all" after pulling these changes.
sort order down into planner, instead of handling it only at the very top
level of the planner. This fixes many things. An explicit sort is now
avoided if there is a cheaper alternative (typically an indexscan) not
only for ORDER BY, but also for the internal sort of GROUP BY. It works
even when there is no other reason (such as a WHERE condition) to consider
the indexscan. It works for indexes on functions. It works for indexes
on functions, backwards. It's just so cool...
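For example (names hypothetical), both of these can now be satisfied by
scanning a functional index, with no explicit sort step:
CREATE INDEX emp_lower_name_idx ON emp (lower(name));
SELECT * FROM emp ORDER BY lower(name);         -- forward index scan
SELECT * FROM emp ORDER BY lower(name) DESC;    -- same index, scanned backwards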
CAUTION: I have changed the representation of SortClause nodes, therefore
THIS UPDATE BREAKS STORED RULES. You will need to initdb.