of a composite type to get that type's OID as their second parameter,
in place of typelem, which is useless. The actual changes are mostly
centralized in getTypeInputInfo and siblings, but I had to fix a few
places that were fetching pg_type.typelem for themselves instead of
using the lsyscache.c routines. Also, I renamed all the related variables
from 'typelem' to 'typioparam' to discourage people from assuming that
they necessarily contain array element types.
In the past, we used a 'Lispy' linked list implementation: a "list" was
merely a pointer to the head node of the list. The problem with that
design is that it makes lappend() and length() linear time. This patch
fixes that problem (and others) by maintaining a count of the list
length and a pointer to the tail node along with each head node pointer.
A "list" is now a pointer to a structure containing some meta-data
about the list; the head and tail pointers in that structure refer
to ListCell structures that maintain the actual linked list of nodes.
The function names of the list API have also been changed to, I hope,
be more logically consistent. By default, the old function names are
still available; they will be disabled by default once the rest of
the tree has been updated to use the new API names.
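For illustration, a minimal sketch of the new representation, abbreviated
from the pg_list.h of that era (union members trimmed to the essentials):

    typedef struct ListCell ListCell;

    typedef struct List
    {
        NodeTag     type;       /* T_List, T_IntList, or T_OidList */
        int         length;     /* maintained, so length() is O(1) */
        ListCell   *head;
        ListCell   *tail;       /* maintained, so lappend() is O(1) */
    } List;

    struct ListCell
    {
        union
        {
            void   *ptr_value;
            int     int_value;
            Oid     oid_value;
        } data;
        ListCell   *next;
    };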
7.4 rewrite for hashed aggregate support. If the transition data type
is pass-by-reference, the transValue must be pfreed when starting a new
group, else we leak one transition value per group. Thanks to
Rae Steining for providing a reproducible test case.
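In outline, the fix is a targeted pfree in the group-reset path; a sketch,
with field names assumed from the nodeAgg.c of that vintage:

    /* at a group boundary, release the old transition value first */
    if (!peraggstate->transtypeByVal && !pergroupstate->transValueIsNull)
        pfree(DatumGetPointer(pergroupstate->transValue));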
Make btree index creation and initial validation of foreign-key constraints
use maintenance_work_mem rather than work_mem as their memory limit.
Add some code to guc.c to allow these variables to be referenced by their
old names in SHOW and SET commands, for backwards compatibility.
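One plausible shape for the backwards-compatibility hook is a simple alias
table consulted during GUC name lookup; a sketch (table name and pairing
convention assumed):

    /* each old name is followed by its current equivalent */
    static const char *const map_old_guc_names[] = {
        "sort_mem", "work_mem",
        "vacuum_mem", "maintenance_work_mem",
        NULL
    };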
as well as the hash function (formerly the comparison function was hardwired
as memcmp()). This makes it possible to eliminate the special-purpose
hashtable management code in execGrouping.c in favor of using dynahash to
manage tuple hashtables; which is a win because dynahash knows how to expand
a hashtable when the original size estimate was too small, whereas the
special-purpose code was too stupid to do that. (See recent gripe from
Stephan Szabo about poor performance when hash table size estimate is way
off.) Free side benefit: when using string_hash, the default comparison
function is now strncmp() instead of memcmp(). This should eliminate some
part of the overhead associated with larger NAMEDATALEN values.
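Under the revised API a caller hands dynahash both functions via HASHCTL;
a hedged sketch (the key/entry types and the hash/match functions are
placeholders for caller-supplied ones):

    HASHCTL     hash_ctl;
    HTAB       *htab;

    MemSet(&hash_ctl, 0, sizeof(hash_ctl));
    hash_ctl.keysize = sizeof(MyKey);
    hash_ctl.entrysize = sizeof(MyEntry);
    hash_ctl.hash = my_hash_func;       /* caller-specified hash function */
    hash_ctl.match = my_match_func;     /* new: caller-specified comparison */
    htab = hash_create("my table", 256, &hash_ctl,
                       HASH_ELEM | HASH_FUNCTION | HASH_COMPARE);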
It also works to create a non-polymorphic aggregate from polymorphic
functions, should you want to do that. Regression test added, docs still
lacking. By Joe Conway, with some kibitzing from Tom Lane.
specific hash functions used by hash indexes, rather than the old
not-datatype-aware ComputeHashFunc routine. This makes it safe to do
hash joining on several datatypes that previously couldn't use hashing.
The sets of datatypes that are hash indexable and hash joinable are now
exactly the same, whereas before each had some that weren't in the other.
extensions to support our historical behavior. An aggregate belongs
to the closest query level of any of the variables in its argument,
or the current query level if there are no variables (e.g., COUNT(*)).
The implementation involves adding an agglevelsup field to Aggref,
and treating outer aggregates like outer variables at planning time.
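Structurally the change is one new field; a sketch of the node with the
other fields elided:

    typedef struct Aggref
    {
        Expr        xpr;
        Oid         aggfnoid;       /* pg_proc OID of the aggregate */
        /* ... other fields ... */
        Index       agglevelsup;    /* > 0 if the agg belongs to an outer
                                     * query level, like varlevelsup */
    } Aggref;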
when the plan is ReScanned, we don't have to rebuild the hash table
if there is no parameter change for its child node. This idea has
been used for a long time in Sort and Material nodes, but was not in
the hash code till now.
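The resulting ReScan logic is roughly as below (a sketch patterned on the
hash join state node; error handling elided):

    if (node->hj_HashTable != NULL)
    {
        if (node->js.ps.righttree->chgParam == NULL)
        {
            /* no parameter change below the Hash node: keep the table */
        }
        else
        {
            /* inner input may produce different rows: rebuild */
            ExecHashTableDestroy(node->hj_HashTable);
            node->hj_HashTable = NULL;
        }
    }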
locParam lists can be converted to bitmapsets to speed updating. Also,
replace 'locParam' with 'allParam', which contains all the paramIDs
relevant to the node (i.e., the union of extParam and locParam); this
saves a step during SetChangedParamList() without costing anything
elsewhere.
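With bitmapsets, pushing changed parameters down to a node reduces to set
arithmetic; a hedged sketch of the updated step:

    /* which of the params this node cares about have changed? */
    Bitmapset  *parmset = bms_intersect(node->plan->allParam, newchg);

    /* fold them into the node's pending-change set */
    node->chgParam = bms_join(node->chgParam, parmset);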
a per-query memory context created by CreateExecutorState --- and destroyed
by FreeExecutorState. This provides a final solution to the longstanding
problem of memory leaked by various ExecEndNode calls.
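Callers now bracket execution with a create/free pair; in outline (a
sketch, with the run-the-plan details elided):

    EState *estate = CreateExecutorState();

    /*
     * ... initialize and run the plan; every per-query allocation is
     * made in estate->es_query_cxt ...
     */

    FreeExecutorState(estate);  /* one call reclaims all of it */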
execution state trees, and ExecEvalExpr takes an expression state tree
not an expression plan tree. The plan tree is now read-only as far as
the executor is concerned. Next step is to begin actually exploiting
this property.
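The calling convention, roughly (signatures per the executor headers of
that era; the isDone argument may be NULL when set results are not
expected):

    ExprState  *exprstate;
    Datum       result;
    bool        isNull;

    /* build an expression state tree; the plan expression stays read-only */
    exprstate = ExecInitExpr(expr, planstate);

    /* evaluate it against the current tuple context */
    result = ExecEvalExpr(exprstate, econtext, &isNull, NULL);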
so that all executable expression nodes inherit from a common supertype
Expr. This is somewhat of an exercise in code purity rather than any
real functional advance, but getting rid of the extra Oper or Func node
formerly used in each operator or function call should provide at least
a little space and speed improvement.
initdb forced by changes in stored-rules representation.
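Concretely, every executable node now embeds Expr as its first field; a
sketch of the pattern, fields abbreviated:

    typedef struct Expr
    {
        NodeTag     type;
    } Expr;

    typedef struct FuncExpr
    {
        Expr        xpr;            /* common supertype comes first */
        Oid         funcid;         /* OID of the function to call */
        Oid         funcresulttype; /* OID of the result type */
        List       *args;           /* argument expressions */
        /* ... */
    } FuncExpr;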
to plan nodes, not vice-versa. All executor state nodes now inherit from
struct PlanState. Copying of plan trees has been simplified by not
storing a list of SubPlans in Plan nodes (eliminating duplicate links).
The executor still needs such a list, but it can build it during
ExecutorStart since it has to scan the plan tree anyway.
No initdb forced since no stored-on-disk structures changed, but you
will need a full recompile because of node-numbering changes.
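The inverted relationship looks roughly like this, abbreviated from the
execnodes.h of that era:

    typedef struct PlanState
    {
        NodeTag     type;
        Plan       *plan;       /* associated plan node (read-only) */
        EState     *state;      /* top-level executor state */
        struct PlanState *lefttree;  /* state trees mirror plan tree shape */
        struct PlanState *righttree;
        /* ... per-node run-time data ... */
    } PlanState;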
operations: make sure we use operators that are compatible, as determined
by a mergejoin link in pg_operator. Also, add code to planner to ensure
we don't try to use hashed grouping when the grouping operators aren't
marked hashable.
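The planner-side guard amounts to a per-operator catalog check; a hedged
sketch using the lsyscache helper and current list macros (the
groupOperators list of operator OIDs is an assumption):

    /* refuse hashed grouping unless every grouping operator is hashable */
    ListCell   *lc;

    foreach(lc, groupOperators)
    {
        if (!op_hashjoinable(lfirst_oid(lc)))
            return false;       /* fall back to sort-based grouping */
    }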
of groups produced by GROUP BY. This improves the accuracy of planning
estimates for grouped subselects, and is needed to check whether a
hashed aggregation plan risks memory overflow.
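The estimate then feeds the hash-versus-sort choice; a hedged sketch
(variable names and the memory-limit expression are assumptions):

    double  dNumGroups = estimate_num_groups(parse, groupExprs, dNumInputRows);

    /* hashed aggregation is credible only if the table should fit */
    use_hashed_grouping =
        (dNumGroups * hashentrysize <= memory_limit_kb * 1024.0);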
node now does its own grouping of the input rows, and has no need for a
preceding GROUP node in the plan pipeline. This allows elimination of
the misnamed tuplePerGroup option for GROUP, and actually saves more code
in nodeGroup.c than it costs in nodeAgg.c, as well as being presumably
faster. Restructure the API of query_planner so that we do not commit to
using a sorted or unsorted plan in query_planner; instead grouping_planner
makes the decision. (Right now it isn't any smarter than query_planner
was, but that will change as soon as it has the option to select a hash-
based aggregation step.) Despite all the hackery, no initdb needed since
only in-memory node types changed.
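The reworked entry point hands back both candidates rather than choosing;
a sketch of the new signature as this change describes it:

    void query_planner(Query *root, List *tlist, double tuple_fraction,
                       Path **cheapest_path, Path **sorted_path);

    /* grouping_planner then decides between them, and will be able to
     * prefer cheapest_path once hashed aggregation is available */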
just the significant fields of FunctionCallInfoData, rather than MemSet'ing
the whole struct to zero. Unused positions in the arg[] array will
thereby contain garbage rather than zeroes. This buys back some of the
performance hit from increasing FUNC_MAX_ARGS. Also tweak tuplesort.c
code for more speed by marking some routines 'inline'. All together
these changes speed up simple sorts, like count(distinct int4column),
by about 25% on a P4 running RH Linux 7.2.
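The initialization pattern, sketched (macro usage per fmgr.h; a
two-argument call shown for concreteness):

    FunctionCallInfoData fcinfo;
    Datum       result;

    /* set only the meaningful fields; arg[] slots beyond nargs are
     * deliberately left as garbage now */
    InitFunctionCallInfoData(fcinfo, &flinfo, 2, NULL, NULL);
    fcinfo.arg[0] = datum1;
    fcinfo.argnull[0] = false;
    fcinfo.arg[1] = datum2;
    fcinfo.argnull[1] = false;
    result = FunctionCallInvoke(&fcinfo);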
to be flexible about assignment casts without introducing ambiguity in
operator/function resolution. Introduce a well-defined promotion hierarchy
for numeric datatypes (int2->int4->int8->numeric->float4->float8).
Change make_const to initially label numeric literals as int4, int8, or
numeric (never float8 anymore).
Explicitly mark Func and RelabelType nodes to indicate whether they came
from a function call, explicit cast, or implicit cast; use this to do
reverse-listing more accurately and without so many heuristics.
Explicit casts to char, varchar, bit, varbit will truncate or pad without
raising an error (the pre-7.2 behavior), while assigning to a column without
any explicit cast will still raise an error for wrong-length data like 7.3.
This more nearly follows the SQL spec than 7.2 behavior (we should be
reporting a 'completion condition' in the explicit-cast cases, but we have
no mechanism for that, so just do silent truncation).
Fix some problems with enforcement of typmod for array elements;
it didn't work at all in 'UPDATE ... SET array[n] = foo', for example.
Provide a generalized array_length_coerce() function to replace the
specialized per-array-type functions that used to be needed (and were
missing for NUMERIC as well as all the datetime types).
Add missing conversions int8<->float4, text<->numeric, oid<->int8.
initdb forced.
qualified operator names directly, for example CREATE OPERATOR myschema.+
( ... ). To qualify an operator name in an expression you need to write
OPERATOR(myschema.+) (thanks to Peter for suggesting an escape hatch).
I also took advantage of having to reformat pg_operator to fix something
that'd been bugging me for a while: mergejoinable operators should have
explicit links to the associated cross-data-type comparison operators,
rather than hardwiring an assumption that they are named < and >.
entries, per pghackers discussion. This fixes aggregates to live in
namespaces, and also simplifies/speeds up lookup in parse_func.c.
Also, add a 'proimplicit' flag to pg_proc that controls whether a type
coercion function may be invoked implicitly, or only explicitly. The
current settings of these flags are more permissive than I would like,
but we will need to debate and refine the behavior; for now, I avoided
breaking regression tests as much as I could.
now just below FATAL in server_min_messages. Added more text to
highlight the ordering difference between it and client_min_messages.
---------------------------------------------------------------------------
REALLYFATAL => PANIC
STOP => PANIC
New INFO level that prints to the client by default
New LOG level that prints to the server log by default
Cause VACUUM information to print only to the client
NOTICE => INFO where purely informational messages are sent
DEBUG => LOG for purely server status messages
DEBUG removed, kept only for backward compatibility
DEBUG5, DEBUG4, DEBUG3, DEBUG2, DEBUG1 added
DebugLvl removed in favor of new DEBUG[1-5] symbols
New server_min_messages GUC parameter with values:
DEBUG[5-1], INFO, NOTICE, ERROR, LOG, FATAL, PANIC
New client_min_messages GUC parameter with values:
DEBUG[5-1], LOG, INFO, NOTICE, ERROR, FATAL, PANIC
Server startup now logged with LOG instead of DEBUG
Remove debug_level GUC parameter
elog() numbers now start at 10
Add test to print error message if older elog() values are passed to elog()
Bootstrap mode now has a -d that requires an argument, like postmaster
1. If there is exactly one pg_operator entry of the right name and oprkind,
oper() and related routines would return that entry whether its input type
had anything to do with the request or not. This is just premature
optimization: we shouldn't return the single candidate until after we verify
that it really is a valid candidate, ie, is at least coercion-compatible
with the given types.
2. oper() and related routines only promise a coercion-compatible result.
Unfortunately, there were quite a few callers that assumed the returned
operator is binary-compatible with the given datatype; they would proceed
to call it without making any datatype coercions. These callers include
sorting, grouping, aggregation, and VACUUM ANALYZE. In general I think
it is appropriate for these callers to require an exact or binary-compatible
match, so I've added a new routine compatible_oper() that only succeeds if
it can find an operator that doesn't require any run-time conversions.
Callers now call oper() or compatible_oper() depending on whether they are
prepared to deal with type conversion or not.
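A caller that cannot insert coercions now does, in rough outline (the
signature is approximated from the parser code of that era, and the error
text is illustrative only):

    /* succeeds only if no run-time type conversion would be required */
    Operator optup = compatible_oper(opname, ltypeId, rtypeId, true);

    if (optup == NULL)
        elog(ERROR, "no binary-compatible operator found");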
The upshot of these bugs is revealed by the following silliness in PL/Tcl's
selftest: it creates an operator @< on int4, and then tries to use it to
sort a char(N) column. The system would let it do that :-( (and evidently
has done so since 6.3 :-( :-(). The result in this case was just a silly
sort order, but the reverse combination would've provoked a coredump from
trying to dereference integers. With this fix you get more reasonable
behavior:
pltcl_test=# select * from T_pkey1 order by key1, key2 using @<;
ERROR: Unable to identify an operator '@<' for types 'bpchar' and 'bpchar'
You will have to retype this query using an explicit cast