in cases of qualified rules as well as unqualified ones. Tweak rules
test to avoid cluttering output with dummy SELECT results. Update
documentation to match code.
constraint. This case (a) is useless, (b) violates SQL92, and
(c) is certain to cause a failure downstream when we try to create
an index with duplicated column names. So give an appropriate error
message instead of letting the index failure occur. Per report from
Colin Strickland. NOTE: currently, CREATE INDEX fooi ON foo(f1,f1)
still fails with 'cannot insert duplicate key' error. Should we
change that too? What about functional indexes?
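For context, the check being added amounts to little more than the following
standalone C illustration; the real code works on parse-node lists, and the
function name and error wording here are made up:

    #include <stdio.h>
    #include <string.h>

    /* Illustrative only: report a duplicated column name in an
     * index/constraint column list up front, instead of letting index
     * creation fail later with a confusing error. */
    static const char *
    find_duplicate_column(const char *cols[], int ncols)
    {
        for (int i = 0; i < ncols; i++)
            for (int j = i + 1; j < ncols; j++)
                if (strcmp(cols[i], cols[j]) == 0)
                    return cols[i];
        return NULL;
    }

    int
    main(void)
    {
        const char *cols[] = {"f1", "f1"};
        const char *dup = find_duplicate_column(cols, 2);

        if (dup != NULL)
            fprintf(stderr, "ERROR: column \"%s\" appears twice\n", dup);
        return 0;
    }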
useful as yet, since its primary source of information is (full) VACUUM,
which makes a concerted effort to get rid of free space before telling
the map about it ... next stop is concurrent VACUUM ...
immediately, we will fork a child even if the database state does not
permit connections to be accepted (eg, we are in recovery mode).
The child process will correctly reject the connection and exit as
soon as it's finished collecting the connection request message.
However, this means that reaper() must be prepared to see child-process
exit signals even while it's waiting for the startup or shutdown
process to finish. As the code stood, a connection request arriving during
database recovery or shutdown would cause a postmaster abort.
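Roughly, the situation the reaper now has to handle looks like this sketch;
the variable and function contents below are invented for illustration, not
taken from postmaster.c:

    #include <signal.h>
    #include <sys/types.h>
    #include <sys/wait.h>

    static pid_t startup_pid = 0;    /* pid of startup process, 0 if none */
    static pid_t shutdown_pid = 0;   /* pid of shutdown process, 0 if none */

    static void
    reaper(int sig)
    {
        pid_t pid;
        int   status;

        (void) sig;
        while ((pid = waitpid(-1, &status, WNOHANG)) > 0)
        {
            if (pid == startup_pid)
            {
                startup_pid = 0;
                /* recovery finished: start accepting normal connections */
            }
            else if (pid == shutdown_pid)
            {
                shutdown_pid = 0;
                /* shutdown process done: postmaster may now exit */
            }
            else
            {
                /* an ordinary backend -- possibly one forked only to
                 * reject a connection -- has exited; clean it up and
                 * keep waiting for the special child */
            }
        }
    }

    int
    main(void)
    {
        signal(SIGCHLD, reaper);
        /* ... fork startup process, enter postmaster main loop ... */
        return 0;
    }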
stub) into the rest of the system. Adopt a cleaner approach to preventing
deadlock in concurrent heap_updates: allow RelationGetBufferForTuple to
select any page of the rel, and put the onus on it to lock both buffers
in a consistent order. Remove no-longer-needed isExtend hack from
API of ReleaseAndReadBuffer.
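A minimal sketch of the consistent-ordering rule, with pthread mutexes
standing in for buffer content locks; none of this is the real bufmgr
interface:

    #include <pthread.h>

    /* When two pages must both be locked (the old tuple's page and the
     * page chosen for the new tuple version), always acquire the
     * lower-numbered page's lock first.  If every locker follows the
     * same order, no cycle of waiters can form. */

    #define NPAGES 16

    static pthread_mutex_t page_lock[NPAGES];

    static void
    lock_two_pages(unsigned int a, unsigned int b)
    {
        if (a == b)
        {
            pthread_mutex_lock(&page_lock[a]);
            return;
        }
        if (a < b)
        {
            pthread_mutex_lock(&page_lock[a]);
            pthread_mutex_lock(&page_lock[b]);
        }
        else
        {
            pthread_mutex_lock(&page_lock[b]);
            pthread_mutex_lock(&page_lock[a]);
        }
    }

    int
    main(void)
    {
        for (int i = 0; i < NPAGES; i++)
            pthread_mutex_init(&page_lock[i], NULL);
        lock_two_pages(7, 3);   /* always locks page 3 first, then 7 */
        return 0;
    }

Only the acquisition order matters for deadlock avoidance; the locks can be
released in any order.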
have any newly-dead tuples on them. This is a longstanding deficiency
that prevents VACUUM from compacting a file as much as one would expect.
The change requires fixing repair_frag not to assume that fraged_pages is
a subset of vacuum_pages.
Also make some further cleanups of places that assumed page numbers fit
in int and tuple counts fit in uint32.
do anything yet, but it has the necessary connections to initialization
and so forth. Make some gestures towards allowing the number of blocks in
a relation to be BlockNumber, ie, unsigned int, rather than signed int.
(I doubt I got all the places that are sloppy about it, yet.) On the
way, replace the hardwired NLOCKS_PER_XACT fudge factor with a GUC
variable.
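A small illustration of the signed-vs-unsigned hazard the BlockNumber change
is aimed at; the typedef and sentinel are written inline here for the sketch
and do not reflect the exact headers:

    /* With an unsigned 32-bit block number, int loop variables and
     * "blkno < 0" style tests silently misbehave once block numbers can
     * exceed INT_MAX; a reserved sentinel replaces uses of -1. */
    typedef unsigned int BlockNumber;

    #define InvalidBlockNumber ((BlockNumber) 0xFFFFFFFF)  /* instead of -1 */

    static void
    scan_all_blocks(BlockNumber nblocks)
    {
        /* loop variable must be BlockNumber, not int */
        for (BlockNumber blkno = 0; blkno < nblocks; blkno++)
        {
            /* ... read and process block blkno ... */
        }
    }

    int
    main(void)
    {
        scan_all_blocks((BlockNumber) 10);
        return 0;
    }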
IS TRUE, etc, with some degree of verisimilitude. Split out
selectivity support functions from builtins.h into a new header
file selfuncs.h, so as to reduce the number of header files builtins.h
must depend on. Fix a few missing inclusions exposed thereby.
From Joe Conway, with some kibitzing from Tom Lane.
> > secure_ctx changes too. it will be PGC_BACKEND after '-p'.
>
> Oh, okay, I missed that part. Could we see the total state of the
> patch --- ie, a diff against current CVS, not a bunch of deltas?
> I've gotten confused about what's in and what's out.
Ok, here it is. Cleared the ctx comment too - after -p
it will be PGC_BACKEND in any case.
Marko Kreen
a new postmaster child process. This should eliminate problems with
authentication blocking (e.g., ident, SSL init) and also reduce problems
with the accept queue filling up under heavy load.
The option to send elog output to a different file per backend (postgres -o)
has been disabled for now because the initialization would have to happen
in a different order and it's not clear we want to keep this anyway.
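In outline, the accept loop now looks something like this fragment; the
function name and structure are illustrative, not the actual postmaster code:

    #include <unistd.h>
    #include <sys/socket.h>

    /* Fork as soon as accept() returns, so that all potentially slow
     * work (ident lookup, SSL negotiation, password exchange) happens in
     * the child and never blocks the accept loop. */
    static void
    server_loop(int listen_sock)
    {
        for (;;)
        {
            int conn = accept(listen_sock, NULL, NULL);

            if (conn < 0)
                continue;            /* EINTR etc.: just retry */

            if (fork() == 0)
            {
                /* child: authenticate, then run the session */
                close(listen_sock);
                /* authenticate_and_run(conn);  -- hypothetical helper */
                _exit(0);
            }
            /* parent: close its copy and go straight back to accept() */
            close(conn);
        }
    }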
tests to return the correct results per SQL9x when given NULL inputs.
Reimplement these tests as well as IS [NOT] NULL to have their own
expression node types, instead of depending on special functions.
From Joe Conway, with a little help from Tom Lane.
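As a sketch, dedicated node types for these tests might look like the
following; the type and field names are illustrative, not the committed
definitions:

    /* The point of a distinct node is that, e.g., "x IS TRUE" must yield
     * false (not NULL) for a NULL x, which a strict function cannot
     * express. */
    struct Expr;                      /* generic expression type, elided */

    typedef enum BoolTestKind
    {
        TEST_IS_TRUE,
        TEST_IS_NOT_TRUE,
        TEST_IS_FALSE,
        TEST_IS_NOT_FALSE,
        TEST_IS_UNKNOWN,
        TEST_IS_NOT_UNKNOWN
    } BoolTestKind;

    typedef struct BooleanTestExpr
    {
        struct Expr *arg;             /* input expression */
        BoolTestKind kind;            /* which of the six tests to apply */
    } BooleanTestExpr;

    typedef enum NullTestKind
    {
        TEST_IS_NULL,
        TEST_IS_NOT_NULL
    } NullTestKind;

    typedef struct NullTestExpr
    {
        struct Expr *arg;             /* input expression */
        NullTestKind kind;
    } NullTestExpr;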
SI messages now include the relevant database OID, so that operations
in one database do not cause useless cache flushes in backends attached
to other databases. Declare SI messages properly using a union, to
eliminate the former assumption that Oid is the same size as int or Index.
Rewrite the nearly-unreadable code in inval.c, and document it better.
Arrange for catcache flushes at end of command/transaction to happen before
relcache flushes do --- this avoids loading a new tuple into the catcache
while setting up a new relcache entry, only to have it flushed again
immediately.
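A sketch of the kind of tagged-union message involved; the names, layout,
and the dbId conventions here are assumptions for illustration only:

    typedef unsigned int Oid;

    typedef struct CatCacheInvalMsg
    {
        int          id;          /* catcache id; >= 0 tags this variant */
        Oid          dbId;        /* database containing the tuple */
        unsigned int hashValue;   /* hash of the cached tuple's lookup key */
    } CatCacheInvalMsg;

    typedef struct RelCacheInvalMsg
    {
        int id;                   /* negative value tags this variant */
        Oid dbId;                 /* database containing the relation */
        Oid relId;                /* relation whose relcache entry to drop */
    } RelCacheInvalMsg;

    typedef union SharedInvalMsg
    {
        int              id;      /* first field of every variant: the tag */
        CatCacheInvalMsg cc;
        RelCacheInvalMsg rc;
    } SharedInvalMsg;

    static void
    process_message(const SharedInvalMsg *msg, Oid my_database)
    {
        if (msg->id >= 0)
        {
            /* catcache message: act only if it is for our database or
             * (by this sketch's convention) for a shared catalog */
            if (msg->cc.dbId == my_database || msg->cc.dbId == 0)
            {
                /* flush catcache entries matching hashValue */
            }
        }
        else
        {
            if (msg->rc.dbId == my_database || msg->rc.dbId == 0)
            {
                /* flush the relcache entry for msg->rc.relId */
            }
        }
    }

Declaring the message as a union with an explicit tag keeps any assumption
about sizeof(Oid) versus sizeof(int) out of the message layout.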
Here is the Tomified version of my 2 pending patches.
Dropped the set_.._real change as it is not needed.
Desc would be:
* use GUC for settings from cmdline
Marko Kreen
CatalogCacheFlushRelation (formerly called SystemCacheRelationFlushed)
how to distinguish tuples it should flush from those it needn't; this
means a relcache flush event now only removes the catcache entries
it ought to, rather than zapping the caches completely as it used to.
Testing with the regression tests indicates that this considerably
improves the lifespan of catcache entries. Also, rearrange catcache
data structures so that the limit on number of cached tuples applies
globally across all the catcaches, rather than being per-catcache.
It was a little silly to have the same size limit on both, say,
pg_attribute caches and pg_am caches (there being only four possible
rows in the latter...). Doing LRU removal across all the caches
instead of locally in each one should reduce cache reload traffic
in the more heavily used caches and improve the efficiency of
cache memory use.
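A toy sketch of the shared-LRU arrangement (intrusive list, made-up names;
the real catcache code is more involved):

    #include <stdlib.h>

    /* Every cached entry, from every cache, is linked into one global
     * list; any hit moves it to the head, and when the global count
     * exceeds the limit the tail entry is evicted no matter which cache
     * owns it. */
    struct Cache;                       /* owning cache, details not needed */

    typedef struct CacheEntry
    {
        struct CacheEntry *lru_prev;
        struct CacheEntry *lru_next;
        struct Cache      *owner;       /* cache this entry belongs to */
        /* ... lookup key and cached tuple would live here ... */
    } CacheEntry;

    static CacheEntry *lru_head;        /* most recently used */
    static CacheEntry *lru_tail;        /* least recently used */
    static int         total_entries;   /* count across all caches */
    static const int   global_limit = 5000;

    static void
    lru_unlink(CacheEntry *e)
    {
        if (e->lru_prev)
            e->lru_prev->lru_next = e->lru_next;
        else
            lru_head = e->lru_next;
        if (e->lru_next)
            e->lru_next->lru_prev = e->lru_prev;
        else
            lru_tail = e->lru_prev;
    }

    static void
    lru_push_head(CacheEntry *e)
    {
        e->lru_prev = NULL;
        e->lru_next = lru_head;
        if (lru_head)
            lru_head->lru_prev = e;
        else
            lru_tail = e;
        lru_head = e;
    }

    /* call on every cache hit, whichever cache it happened in */
    static void
    touch_entry(CacheEntry *e)
    {
        lru_unlink(e);
        lru_push_head(e);
    }

    /* call after inserting a new entry into any of the caches */
    static void
    enforce_global_limit(void)
    {
        while (total_entries > global_limit && lru_tail != NULL)
        {
            CacheEntry *victim = lru_tail;

            lru_unlink(victim);
            /* also remove victim from its owner's hash table here */
            free(victim);
            total_entries--;
        }
    }

With a single list, a rarely-used cache's entries are naturally pushed out
by traffic in the busy caches, which is the point of making the limit global.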
TopTransactionContext, rather than using Dllist. This simplifies and
speeds up the code, and eliminates a former risk of coredump when
out of memory (since the old code didn't bother to check for malloc
failure). It also moves us one step closer to retiring Dllist...
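A toy illustration of why a reset-able per-transaction context is attractive
here; this is a made-up region allocator, not palloc/MemoryContext itself:

    #include <stdio.h>
    #include <stdlib.h>

    /* Allocations for transaction-lifetime bookkeeping come out of one
     * region and all disappear in a single reset at commit/abort, and
     * allocation failure is reported instead of being ignored. */
    typedef struct Chunk
    {
        struct Chunk *next;
    } Chunk;

    typedef struct MemoryRegion
    {
        Chunk *chunks;
    } MemoryRegion;

    static void *
    region_alloc(MemoryRegion *r, size_t size)
    {
        Chunk *c = malloc(sizeof(Chunk) + size);

        if (c == NULL)
        {
            /* unlike bare malloc-and-hope, fail loudly */
            fprintf(stderr, "out of memory\n");
            exit(1);
        }
        c->next = r->chunks;
        r->chunks = c;
        return (void *) (c + 1);
    }

    /* at end of transaction, one call releases everything allocated above */
    static void
    region_reset(MemoryRegion *r)
    {
        while (r->chunks)
        {
            Chunk *next = r->chunks->next;

            free(r->chunks);
            r->chunks = next;
        }
    }

    int
    main(void)
    {
        MemoryRegion xact_context = { NULL };

        region_alloc(&xact_context, 64);   /* per-transaction bookkeeping */
        region_reset(&xact_context);       /* end of transaction */
        return 0;
    }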