When a bitmap index scan uses a partial index, the partial index predicate
must be included in the scan's "recheck condition". Otherwise, if the scan
becomes lossy for lack of bitmap memory, we would fail to enforce
that returned rows satisfy the predicate. Noted while studying bug #2441
from Arjen van der Meijden.
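For illustration, a minimal sketch of the invariant (function and variable
names are mine, not the actual planner code): the bitmap heap scan's recheck
quals must carry the index predicate along with the index quals, so that a
lossy page still filters out rows the partial index never covered.

    #include "nodes/pg_list.h"

    /*
     * Hypothetical sketch: build the recheck condition for a bitmap heap
     * scan over a partial index.  When a page is lossy, every row on it
     * is re-tested against these quals, so the predicate must be here.
     */
    List *
    build_recheck_quals(List *index_quals, List *index_predicate)
    {
        List *recheck = list_copy(index_quals);

        /* AND in the partial-index predicate */
        recheck = list_concat(recheck, list_copy(index_predicate));
        return recheck;
    }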
Fix a thinko in the handling of ScalarArrayOpExpr index conditions: when
there are multiple possible index paths involving ScalarArrayOpExprs, they
are logically to be ANDed together, not ORed.
This thinko was a direct consequence of trying to put the processing
inside generate_bitmap_or_paths(), which I now see was a bit too cute.
So pull it out and make the callers do it separately (there are only two
that need it anyway). Partially responds to bug #2441 from Arjen van der Meijden.
There are some additional infelicities exposed by his example, but they
are also in 8.1.x, while this mistake is not.
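A hedged sketch of the corrected combination logic (the function name and
shape are illustrative, not the committed code; create_bitmap_and_path is
the existing pathnode.c constructor):

    #include "optimizer/pathnode.h"

    /*
     * Illustrative only: several ScalarArrayOpExpr index paths restrict
     * the same rows, so their bitmaps must be intersected (AND), never
     * unioned (OR).
     */
    static Path *
    combine_saop_paths(PlannerInfo *root, RelOptInfo *rel, List *saop_paths)
    {
        if (list_length(saop_paths) == 1)
            return (Path *) linitial(saop_paths);

        return (Path *) create_bitmap_and_path(root, rel, saop_paths);
    }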
Critical sections are no longer nested, and all user-defined
functions are now called outside critical sections. Small
improvements in the WAL protocol.
TODO: improve XLOG replay
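The discipline behind moving user-defined functions out of critical
sections, as a simplified sketch (the helper names are hypothetical; the
macros are the real ones): anything that can elog(ERROR) must run before
START_CRIT_SECTION, because an ERROR inside a critical section escalates
to PANIC.

    #include "postgres.h"
    #include "miscadmin.h"
    #include "utils/rel.h"

    void
    insert_entry(Relation index, Datum value)
    {
        /* May call user-defined support functions: keep it outside. */
        Datum key = compute_key(index, value);      /* hypothetical */

        START_CRIT_SECTION();
        /* Only no-fail, straight-line work in here: modify the page
         * and insert the WAL record describing the change. */
        place_on_page(index, key);                  /* hypothetical */
        END_CRIT_SECTION();
    }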
always has been, because it has no .globl declaration! We've
been relying on the solaris_sparc.s code instead. Rip it out.
(Not back-patched, since this is just cosmetic cleanup.)
Don't throw warnings for 100%-SQL-standard constructs, clean up some minor
infelicities, and try to un-break ecpg to the best of my ability. (It's not clear
how ecpg is going to find out the setting of standard_conforming_strings,
though.) I think pg_dump still needs work, too.
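For what it's worth, a plain libpq client can simply ask the server (a
sketch; whether ecpg will end up doing something like this is exactly the
open question above):

    #include <stdio.h>
    #include <string.h>
    #include <libpq-fe.h>

    int
    main(void)
    {
        PGconn   *conn = PQconnectdb(""); /* connection info from environment */
        PGresult *res;
        int       std_strings = 0;

        if (PQstatus(conn) != CONNECTION_OK)
            return 1;

        res = PQexec(conn, "SHOW standard_conforming_strings");
        if (PQresultStatus(res) == PGRES_TUPLES_OK)
            std_strings = (strcmp(PQgetvalue(res, 0, 0), "on") == 0);

        printf("standard_conforming_strings: %s\n",
               std_strings ? "on" : "off");
        PQclear(res);
        PQfinish(conn);
        return 0;
    }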
Rework the way the pg_class statistics columns (relpages/reltuples) are
updated. To do this, create formal support in heapam.c for
"overwrite" tuple updates (including xlog replay capability) and use that
instead of the ad-hoc overwrites we'd been using in VACUUM and CREATE INDEX.
Take the responsibility for updating stats during CREATE INDEX out of the
individual index AMs, and do it where it belongs, in catalog/index.c. Aside
from being more modular, this avoids having to update the same tuple twice in
some paths through CREATE INDEX. It's probably not measurably faster, but
for sure it's a lot cleaner than before.
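As a sketch of what an overwrite update looks like from a caller's
perspective (simplified; heap_inplace_update is my assumed name for the new
heapam entry point, and the rest is illustrative):

    #include "access/heapam.h"
    #include "catalog/pg_class.h"
    #include "utils/syscache.h"

    /*
     * Simplified sketch: update pg_class.relpages/reltuples in place via
     * an "overwrite" (non-MVCC) tuple update, which heapam also XLOGs.
     */
    void
    update_relstats(Oid relid, BlockNumber numpages, double numtuples)
    {
        Relation    pgclass = heap_open(RelationRelationId, RowExclusiveLock);
        HeapTuple   tuple = SearchSysCacheCopy(RELOID,
                                               ObjectIdGetDatum(relid),
                                               0, 0, 0);
        Form_pg_class rd_rel = (Form_pg_class) GETSTRUCT(tuple);

        rd_rel->relpages = (int32) numpages;
        rd_rel->reltuples = (float4) numtuples;

        /* assumed entry point for the new overwrite support */
        heap_inplace_update(pgclass, tuple);

        heap_close(pgclass, RowExclusiveLock);
    }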
Fold btree bulk deletion and cleanup into a single mostly-physical-order
scan of the index. This requires some
ticklish interlocking considerations, but should create no material
performance impact on normal index operations (at least given the
already-committed changes to make scans work a page at a time). VACUUM
itself should get significantly faster in any index that's degenerated to a
very nonlinear page order. Also, we save one pass over the index entirely,
except in the case where there were no deletions to do and so only one pass
happened anyway.
Original patch by Heikki Linnakangas, rework by Tom Lane.
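Roughly, the new shape of the scan (an illustrative sketch only; the real
code must handle concurrent page splits and the ticklish interlocking
mentioned above):

    #include "access/genam.h"
    #include "access/nbtree.h"
    #include "storage/bufmgr.h"

    /* Illustrative: visit leaf pages in physical order, not key order. */
    void
    vacuum_index_physically(Relation index,
                            IndexBulkDeleteCallback callback,
                            void *callback_state)
    {
        BlockNumber nblocks = RelationGetNumberOfBlocks(index);
        BlockNumber blkno;

        for (blkno = BTREE_METAPAGE + 1; blkno < nblocks; blkno++)
        {
            Buffer  buf = ReadBuffer(index, blkno);

            LockBufferForCleanup(buf);
            /* ... remove the items 'callback' reports as dead ... */
            LockBuffer(buf, BUFFER_LOCK_UNLOCK);
            ReleaseBuffer(buf);
        }
    }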
Make btree index scans work a page at a time in all cases (both
btgettuple and btgetmulti). This eliminates the problem of "re-finding" the
exact stopping point, since the stopping point is effectively always a page
boundary, and index items are never moved across pre-existing page boundaries.
A small penalty is that the keys_are_unique optimization is effectively
disabled (and, therefore, is removed in this patch), causing us to apply
_bt_checkkeys() to at least one more tuple than necessary when looking up a
unique key. However, the advantages for non-unique cases seem great enough to
accept this tradeoff. Aside from simplifying and (sometimes) speeding up the
indexscan code, this will allow us to reimplement btbulkdelete as a largely
sequential scan instead of index-order traversal, thereby significantly
reducing the cost of VACUUM. Those changes will come in a separate patch.
Original patch by Heikki Linnakangas, rework by Tom Lane.
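Schematically (field and function names are illustrative, not the patch's):
the scan batches all matches from the current page into private memory
while it holds the page, so its saved "position" reduces to an array index
and there is nothing on-page to re-find.

    #include "access/itup.h"
    #include "storage/itemptr.h"

    /* Illustrative sketch of page-at-a-time scan state. */
    typedef struct ScanPos
    {
        ItemPointerData items[MaxIndexTuplesPerPage];   /* page's matches */
        int         nitems;
        int         next;       /* next entry to hand back */
    } ScanPos;

    static bool
    scan_get_next(ScanPos *pos, ItemPointer result)
    {
        if (pos->next >= pos->nitems)
            return false;       /* time to step to the next page */
        *result = pos->items[pos->next++];
        return true;
    }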
< * %Disallow changing default expression of a SERIAL column?
> * %Disallow changing DEFAULT expression of a SERIAL column?
472a473,476
> * Add DEFAULT .. AS OWNER so permission checks are done as the table
> owner
>
> This would be useful for SERIAL nextval() calls and CHECK constraints.
Add a column showing the number of stored pages to
pg_freespacemap_relations --- while one could theoretically get that
number by counting rows in pg_freespacemap_pages, it's surely the hard
way to do it. Avoid expensive and inconvenient conversion to and from
text format. Minor code and docs cleanup.
it's no longer necessary to have three separate calls. This patch also
fixes things so we don't try to read pg_internal.init until after we've
obtained lock on the target database. The old behavior was fairly harmless, but it's
certainly cleaner this way.