Studio 2005. Basically MS defined errcode in the headers with a typedef,
so we have to #define it out of the way.
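For illustration, roughly the shape of the trick, from memory (the
replacement name and the header pulled in here are illustrative, not
necessarily the committed code):

    /* MSVC 2005's CRT headers declare a function named errcode; hide it
     * behind another name while pulling those headers in, so it cannot
     * collide with our ereport() errcode(). */
    #if defined(_MSC_VER) && (_MSC_VER >= 1400)
    #define errcode __msvc_errcode
    #include <crtdefs.h>
    #undef errcode
    #endif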
While at it, fix a function declaration in plpython that didn't match
the implementation (volatile missing).
Magnus Hagander
discussion (including making def_arg allow reserved words), add missed
opt_definition for UNIQUE case. Put the reloptions support code in a less
random place (I chose to make a new file access/common/reloptions.c).
Eliminate header inclusion creep. Make the index options functions safely
user-callable (seems like client apps might like to be able to test validity
of options before trying to make an index). Reduce overhead for normal case
with no options by allowing rd_options to be NULL. Fix some unmaintainably
klugy code, including getting rid of Natts_pg_class_fixed at long last.
Some stylistic cleanup too, and pay attention to keeping comments in sync
with code.
Documentation still needs work, though I did fix the omissions in
catalogs.sgml and indexam.sgml.
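Regarding the rd_options-may-be-NULL fast path above, a minimal sketch of
the accessor pattern it enables (the StdRdOptions layout and the macro
shown here are illustrative, not necessarily the committed definitions):

    typedef struct StdRdOptions
    {
        int32   vl_len_;        /* varlena header */
        int     fillfactor;     /* page fill target, in percent */
    } StdRdOptions;

    /* fall back to a default when the relation carries no parsed options */
    #define RelationGetFillFactor(relation, defaultff) \
        ((relation)->rd_options != NULL ? \
         ((StdRdOptions *) (relation)->rd_options)->fillfactor : (defaultff))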
the read lock we hold on the table's parent relation until commit.
Update equalfuncs.c for the new field in AlterTableCmd. Various
improvements to comments, variable names, and error reporting.
There is room for further improvement here, but this is at least
a step in the right direction.
Open items:
There were a few tangentially related issues that have come up that I think
are TODOs. I'm likely to tackle one or two of these next so I'm interested in
hearing feedback on them as well.
. Constraints currently do not know anything about inheritance. Tom suggested
adding coninhcount and conislocal columns, as attributes have, to track
their inheritance status.

. Foreign key constraints currently do not get copied to new children (and
therefore my code doesn't verify them). I don't think it would be hard to
add them and treat them like CHECK constraints.
. No constraints at all are copied to tables defined with LIKE. That makes it
hard to use LIKE to define new partitions. The standard defines LIKE and
specifically says it does not copy constraints. But the standard already has
an option called INCLUDING DEFAULTS; we could always define a non-standard
extension LIKE table INCLUDING CONSTRAINTS that gives the user the option to
request a copy including constraints.
. Personally, I think the whole attislocal thing is bunk. The decision about
whether to drop a column from child tables or not is something that
should be up to the user and trying to DWIM based on whether there was ever
a local definition or the column was acquired purely through inheritance is
hardly ever going to match up with user expectations.
. And of course there's the whole unique and primary key constraint issue. I
think that to get any traction at all on this, a prerequisite is a real
partitioned table implementation where the system knows what the partition
key is, so it can recognize when it's a leading part of an index key.
Greg Stark
ScalarArrayOpExpr index quals: we were estimating the right total
number of rows returned, but treating the index-access part of the
cost as if a single scan were fetching that many consecutive index
tuples. Actually we should treat it as a multiple indexscan, and
if there are enough of 'em the Mackert-Lohman discount should kick in.
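For reference, a sketch of the Mackert-Lohman page-fetch approximation as
I recall it from the planner's cost model (T = pages in the table,
N = tuples in the table, s = selectivity, b = buffer pages available);
the committed code differs in detail:

    static double
    mackert_lohman_pages_fetched(double T, double N, double s, double b)
    {
        double  Ns = N * s;     /* tuples expected to be fetched */
        double  pages;

        if (T <= b)
        {
            pages = (2.0 * T * Ns) / (2.0 * T + Ns);
            if (pages > T)
                pages = T;
        }
        else if (Ns <= 2.0 * T * b / (2.0 * T - b))
            pages = (2.0 * T * Ns) / (2.0 * T + Ns);
        else
            pages = b + (Ns - 2.0 * T * b / (2.0 * T - b)) * (T - b) / T;

        return pages;
    }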
clauses containing no variables and no volatile functions. Such a clause
can be used as a one-time qual in a gating Result plan node, to suppress
plan execution entirely when it is false. Even when the clause is true,
putting it in a gating node wins by avoiding repeated evaluation of the
clause. In previous PG releases, query_planner() would do this for
pseudoconstant clauses appearing at the top level of the jointree, but
there was no ability to generate a gating Result deeper in the plan tree.
To fix it, get rid of the special case in query_planner(), and instead
process pseudoconstant clauses through the normal RestrictInfo qual
distribution mechanism. When a pseudoconstant clause is found attached to
a path node in create_plan(), pull it out and generate a gating Result at
that point. This requires special-casing pseudoconstants in selectivity
estimation and cost_qual_eval, but on the whole it's pretty clean.
It probably even makes the planner a bit faster than before for the normal
case of no pseudoconstants, since removing pull_constant_clauses saves one
useless traversal of the qual tree. Per gripe from Phil Frost.
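A minimal sketch of what makes a clause "pseudoconstant" in this sense
(contain_var_clause and contain_volatile_functions are the usual planner
helpers; the wrapper name and header paths are just for illustration):

    #include "postgres.h"
    #include "optimizer/clauses.h"
    #include "optimizer/var.h"

    /* true if the qual references no Vars and calls no volatile functions,
     * so it can be evaluated once in a gating Result node rather than
     * once per row */
    static bool
    clause_is_pseudoconstant(Node *clause)
    {
        return !contain_var_clause(clause) &&
               !contain_volatile_functions(clause);
    }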
be delivered directly to the collector process. The extra process context
swaps required to transfer data through the buffer process seem to outweigh
any value the buffering might have. Per recent discussion and tests.
I modified Bruce's draft patch to use poll() rather than select() where
available (this makes a noticeable difference on my system), and fixed
up the EXEC_BACKEND case.
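The shape of the wait loop, sketched (socket name and timeout handling
simplified, HAVE_POLL standing in for the configure-detected symbol; not
the committed code):

    #include <poll.h>
    #include <sys/select.h>
    #include <sys/time.h>

    /* Wait for data on the collector's socket, preferring poll() where
     * available since it avoids rebuilding an fd_set on every loop. */
    static int
    wait_for_stats_input(int sock, int timeout_ms)
    {
    #ifdef HAVE_POLL
        struct pollfd pfd;

        pfd.fd = sock;
        pfd.events = POLLIN | POLLERR;
        pfd.revents = 0;

        return poll(&pfd, 1, timeout_ms);
    #else
        fd_set          rfds;
        struct timeval  tv;

        FD_ZERO(&rfds);
        FD_SET(sock, &rfds);
        tv.tv_sec = timeout_ms / 1000;
        tv.tv_usec = (timeout_ms % 1000) * 1000;

        return select(sock + 1, &rfds, NULL, NULL, &tv);
    #endif
    }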
the order in which it visits tables is not dependent on the physical order
of pg_constraint entries, and neither are the error messages it gives.
This should correct recently-noticed instability in regression tests.
tuple hash table entries. This addresses the problem previously noted
that use of a 'physical tlist' in the input scan node could bloat the
hash table entries far beyond what the planner expects. It's a better
answer than my previous thought of undoing the physical tlist optimization,
because we can also remove columns that are needed to compute the aggregate
functions but aren't part of the grouping column set.
* new split algorithm (as proposed in http://archives.postgresql.org/pgsql-hackers/2006-06/msg00254.php)
* pickSplit() may now be called for the second and subsequent columns
  of a multicolumn index
* add spl_(l|r)datum_exists to GIST_SPLITVEC -
  pickSplit should check these flags and, when set, use the
  already-defined spl_(l|r)datum for the split; it should then reset
  spl_(l|r)datum_exists to 'false' to signal to the caller that the
  datum was taken into account (see the sketch after this list)
* support for old-style pickSplit(): the resulting split is not
  optimal, but it is correct
* remove the 'bytes' field from GISTENTRY: the size of a value is in
  any case determined by its type
* split GIST_SPLITVEC into two structures: one for use in pickSplit
  and a second for internal use
* some code refactoring
* add subsplit support to the rtree opclasses
TODO: add subsplit support to the contrib modules
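A sketch of the contract around the new flags (illustration only; real
pickSplit implementations do considerably more work):

    Datum
    my_picksplit(PG_FUNCTION_ARGS)
    {
        GistEntryVector *entryvec = (GistEntryVector *) PG_GETARG_POINTER(0);
        GIST_SPLITVEC   *v = (GIST_SPLITVEC *) PG_GETARG_POINTER(1);

        /* ... distribute entryvec->vector[] into v->spl_left/spl_right
         * and compute the left/right union datums as usual ... */

        if (v->spl_ldatum_exists)
        {
            /* the caller (a secondary split) has already fixed the left
             * union datum: fold it into the union we just computed ... */
            v->spl_ldatum_exists = false;   /* tell caller we honored it */
        }
        if (v->spl_rdatum_exists)
        {
            /* likewise for the right side */
            v->spl_rdatum_exists = false;
        }

        PG_RETURN_POINTER(v);
    }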
this someday, but right now it seems that posix_fadvise is immature to
the point of being broken on many platforms ... and we don't have any
benchmark evidence proving it's worth spending time on.
per-tuple space overhead for sorts in memory. I chose to replace the
previous patch that tried to write out the bare minimum amount of data
when sorting on disk; instead, just dump the MinimalTuples as-is. This
wastes 3 to 10 bytes per tuple depending on architecture and null-bitmap
length, but the simplification in the writetup/readtup routines seems
worth it.
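Roughly what "dump the MinimalTuples as-is" means for the write/read
routines, sketched with stdio standing in for the logical-tape layer
(MinimalTuple and palloc come from the backend headers; not the committed
code):

    #include <stdio.h>

    /* the on-tape image is just the in-memory image: t_len, the first
     * field of the minimal tuple, doubles as the length header */
    static void
    writetup_sketch(FILE *tape, MinimalTuple tuple)
    {
        fwrite(tuple, 1, tuple->t_len, tape);
    }

    static MinimalTuple
    readtup_sketch(FILE *tape)
    {
        uint32       len;
        MinimalTuple tuple;

        fread(&len, sizeof(len), 1, tape);          /* leading length word */
        tuple = (MinimalTuple) palloc(len);
        tuple->t_len = len;
        fread((char *) tuple + sizeof(len), 1, len - sizeof(len), tape);
        return tuple;
    }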
analyzing, so that future analyze threshold calculations don't get confused.
Also, make sure we correctly track the decrease in live tuples caused by
deletes.
Per report from Dylan Hansen, patches by Tom Lane and me.
tuples with less header overhead than a regular HeapTuple, per my
recent proposal. Teach TupleTableSlot code how to deal with these.
As proof of concept, change tuplestore.c to store MinimalTuples instead
of HeapTuples. Future patches will expand the concept to other places
where it is useful.
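The gist of the header savings, sketched (field names from memory, not a
verbatim copy of the committed struct): a MinimalTuple keeps only the
fields needed to interpret the user data, dropping the visibility and
ctid fields that only matter for tuples living in a table page.

    typedef struct MinimalTupleData
    {
        uint32      t_len;          /* total length, including this word */
        /* no xmin/xmax/ctid here: these tuples are never on a heap page,
         * so visibility information would be dead weight */
        uint16      t_infomask2;    /* number of attributes + flag bits */
        uint16      t_infomask;     /* various flag bits */
        uint8       t_hoff;         /* offset to user data */
        bits8       t_bits[1];      /* null bitmap, variable length */
    } MinimalTupleData;

    typedef MinimalTupleData *MinimalTuple;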
will be expanded to a list of their member fields, rather than creating
a nested rowtype field as formerly. (The old behavior is still available
by omitting '.*'.) This syntax is not allowed by the SQL spec AFAICS,
so changing its behavior doesn't violate the spec. The new behavior is
substantially more useful since it allows, for example, triggers to check
for data changes with 'if row(new.*) is distinct from row(old.*)'. Per
my recent proposal.
palloc() will normally round allocation requests up to the next power of 2,
so make dynahash choose allocation sizes that are as close to a power of 2
as possible.
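A sketch of how such a choice can be made (not necessarily the committed
code): grow a target chunk size by powers of 2 until it holds a reasonable
number of entries, then allocate that many per request, so palloc()'s
round-up wastes little.

    /* pick entries-per-allocation so the total request comes out at a
     * power of 2 (or just under), given palloc()'s round-up behavior */
    static int
    choose_nelem_alloc(Size entrysize)
    {
        Size    elementSize = MAXALIGN(entrysize);  /* + any per-entry header */
        Size    allocSize = 32 * 4;                 /* starting guess */

        do
        {
            allocSize <<= 1;
        } while (allocSize < 32 * elementSize);

        return (int) (allocSize / elementSize);
    }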
Back-patch to 8.1 --- the problem exists further back, but a much larger
patch would be needed and it doesn't seem worth taking any risks.
opposed to what other versions apparently do, so it's not safe to print an
error message. Besides, getopt_long itself already did, so it's redundant
anyway.
PGDG:
> Yes. In fact the copyright belongs to credativ GmbH the company that
> paid Carsten for his work. As you may or may not know I'm the CEO of
> that company and can assure you that his work was contributed to the
> PostgreSQL project.
After updating to the latest cvs, and also building most of the addons
(like PLs), the following patch is needed for win32 + Visual C++.
* Switch to use the new win32 semaphore code
* Rename win32_open to pgwin32_open. win32_open collides with symbols
defined in Perl. MinGW didn't detect it, but MSVC did. And it's a bit too
generic a name to export globally, imho...
* Python defines some partially broken #pragmas in its headers when
doing a debug build. Work around this (see the sketch below).
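The usual shape of that kind of workaround (whether this is exactly what
went in here, I'm not certain) is to hide _DEBUG while pulling in
Python.h, so the debug-only #pragmas in Python's headers don't fire:

    #if defined(_MSC_VER) && defined(_DEBUG)
    /* Python's headers try to drag in a debug-only python lib via #pragma
     * when _DEBUG is set; pretend we're a release build just for this
     * include */
    #undef _DEBUG
    #include <Python.h>
    #define _DEBUG
    #else
    #include <Python.h>
    #endif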
Magnus Hagander