(WAL logging for this is not done yet, however.) Clean up a number of really
crufty things that are no longer needed now that DROP behaves nicely. Make
the temp table mapper do the right thing when a drop or rename affecting a
temp table is rolled back. Also, remove the "relation modified while in use" error
check, in favor of locking tables at first reference and holding that lock
throughout the statement.
from bufmgr - it would be nice to have a separate hash in smgr
for node <--> fd mappings, but for the moment it's easy to
add a new hash to the relcache.
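Roughly the kind of mapping I mean, sketched standalone with made-up names
(the real thing would live beside the relcache and use the backend's own
hash facilities):

    #include <stddef.h>

    /* Made-up illustration only: a tiny node -> fd map. */
    typedef struct NodeFdEntry
    {
        int          used;      /* 0 = empty slot */
        unsigned int relNode;   /* storage "node" identifying the relation's file */
        int          fd;        /* open file descriptor for that node */
    } NodeFdEntry;

    static NodeFdEntry node_fd_table[64];   /* zero-initialized: all slots empty */

    static int
    node_fd_lookup(unsigned int relNode)
    {
        for (size_t i = 0; i < 64; i++)
        {
            size_t slot = (relNode + i) % 64;

            if (node_fd_table[slot].used && node_fd_table[slot].relNode == relNode)
                return node_fd_table[slot].fd;
        }
        return -1;              /* no cached descriptor for this node */
    }
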
Fixed small bug in xlog.c:ReadRecord.
and DropBuffers. Formerly we cleared the flag for each buffer currently
belonging to the target rel or database, but that's completely wrong!
Must look at BufferTagLastDirtied to see whether the BufferDirtiedByMe
flag is relevant to target rel or not; this is *independent* of the
current contents of the buffer. Vadim spotted this problem, but his
fix was only partially correct...
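To make the distinction concrete, here is a rough standalone sketch with
made-up names (the real structures are in bufmgr): the per-buffer "dirtied
by me" flag must be interpreted against the tag recorded when we last
dirtied the buffer, not against whatever page occupies the buffer now.

    /* Made-up illustration only; names do not match the real bufmgr. */
    typedef struct BufTag { unsigned int dbId; unsigned int relId; } BufTag;

    #define NBUFFERS 128
    static BufTag tag_last_dirtied[NBUFFERS];   /* tag of the page when *we* last dirtied this buffer */
    static int    dirtied_by_me[NBUFFERS];      /* our private "owe an fsync" flag */

    /* Forget local dirty state when a relation is dropped. */
    static void
    forget_relation_dirtied(unsigned int relId)
    {
        for (int i = 0; i < NBUFFERS; i++)
        {
            /* Testing the buffer's *current* tag here would be wrong: it
             * clears flags for unrelated pages now sitting in these buffers,
             * and misses buffers we dirtied whose pages were since replaced. */
            if (dirtied_by_me[i] && tag_last_dirtied[i].relId == relId)
                dirtied_by_me[i] = 0;
        }
    }
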
Hiroshi. ReleaseRelationBuffers now removes rel's buffers from pool,
instead of merely marking them nondirty. The old code would leave valid
buffers for a deleted relation, which didn't cause any known problems
but can't possibly be a good idea. There were several places which called
ReleaseRelationBuffers *and* FlushRelationBuffers, which is now
unnecessary; but there were others that did not. FlushRelationBuffers
no longer emits a warning notice if it finds dirty buffers to flush,
because with the current bufmgr behavior that's not an unexpected
condition. Also, FlushRelationBuffers will flush out all dirty buffers
for the relation regardless of block number. This ensures that
pg_upgrade's expectations are met about tuple on-row status bits being
up-to-date on disk. Lastly, tweak BufTableDelete() to clear the
buffer's tag so that no one can mistake it for being a still-valid
buffer for the page it once held. Formerly, the buffer would not be
found by buffer hashtable searches after BufTableDelete(), but it would
still be thought to belong to its old relation by the routines that
sequentially scan the shared-buffer array. Again I know of no bugs
caused by that, but it still can't be a good idea.
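The tag-clearing tweak amounts to something like this (made-up names; the
real change is in BufTableDelete):

    #include <string.h>

    /* Made-up illustration only. */
    typedef struct BufTag  { unsigned int dbId, relId, blockNum; } BufTag;
    typedef struct BufDesc { BufTag tag; int valid; } BufDesc;

    static void
    buf_table_delete(BufDesc *buf)
    {
        /* ... remove the (tag -> buffer) entry from the lookup hash table ... */

        /* Also wipe the tag itself, so routines that scan the shared-buffer
         * array sequentially cannot mistake this buffer for a still-valid
         * copy of the page it once held. */
        memset(&buf->tag, 0, sizeof(buf->tag));
        buf->valid = 0;
    }
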
whether to do fsync or not, and if so (which should be seldom) just
do the fsync immediately. This way we need not build data structures
in md.c/fd.c for blind writes.
as a shared dirtybit for each shared buffer. The shared dirtybit still
controls writing the buffer, but the local bit controls whether we need
to fsync the buffer's file. This arrangement fixes a bug that allowed
some required fsyncs to be missed, and should improve performance as well.
For more info see my post of same date on pghackers.
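The arrangement, sketched with made-up names: the shared bit says "someone
must write this page", the local bit says "this backend dirtied it and so
owes an fsync of the underlying file at commit".

    /* Made-up illustration only. */
    #define NBUFFERS 128

    static int shared_dirty[NBUFFERS];     /* in shared memory: page needs writing */
    static int my_local_dirty[NBUFFERS];   /* per backend: we must fsync its file */

    static void
    mark_buffer_dirty(int buf)
    {
        shared_dirty[buf]   = 1;           /* controls writing the buffer */
        my_local_dirty[buf] = 1;           /* controls whether *we* fsync */
    }

    static void
    fsync_my_files_at_commit(void)
    {
        for (int buf = 0; buf < NBUFFERS; buf++)
        {
            if (my_local_dirty[buf])
            {
                /* fsync the file underlying this buffer (details elided) */
                my_local_dirty[buf] = 0;
            }
        }
    }
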
In the event of an elog() while the mode was set to immediate write,
there was no way for it to be set back to the normal delayed write.
The mechanism was a waste of space and cycles anyway, since the only user
was varsup.c, which could perfectly well call FlushBuffer directly.
Now it does just that, and the notion of a write mode is gone.
integers) to be strings instead of 'double'. We convert from string form
to internal representation only after type resolution has determined the
correct type for the constant. This eliminates loss-of-precision worries
and gets rid of the change in behavior seen at 17 digits with the
previous kluge.
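A small standalone sketch of the idea (made-up names): hang on to the
literal's source string and convert only once the target type is known, so
exact types never take a lossy round trip through double.

    #include <stdio.h>
    #include <stdlib.h>

    /* Made-up illustration only. */
    typedef enum { RESOLVED_INT8, RESOLVED_FLOAT8, RESOLVED_NUMERIC_TEXT } ResolvedType;

    static void
    convert_constant(const char *lexeme, ResolvedType type)
    {
        switch (type)
        {
            case RESOLVED_INT8:
                printf("int8    %lld\n", strtoll(lexeme, NULL, 10));
                break;
            case RESOLVED_FLOAT8:
                printf("float8  %.17g\n", strtod(lexeme, NULL));
                break;
            case RESOLVED_NUMERIC_TEXT:
                /* exact decimal types can parse the original string directly,
                 * with no intermediate double */
                printf("numeric %s\n", lexeme);
                break;
        }
    }

    int
    main(void)
    {
        /* 18 significant digits: already lossy if forced through a double */
        convert_constant("123456789012345678", RESOLVED_NUMERIC_TEXT);
        convert_constant("1.5", RESOLVED_FLOAT8);
        return 0;
    }
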
it wants to release. This leads to a race condition: does the backend
that's trying to flush the buffer do so before the one that's deleting the
relation does so? Usually no problem, I expect, but on occasion this could
lead to hard-to-reproduce complaints from md.c, especially mdblindwrt.
* Buffer refcount cleanup (per my "progress report" to pghackers, 9/22).
* Add links to backend PROC structs to sinval's array of per-backend info,
and use these links for routines that need to check the state of all
backends (rather than the slow, complicated search of the ShmemIndex
hashtable that was used before). Add databaseOID to PROC structs.
* Use this to implement an interlock that prevents DESTROY DATABASE of
a database containing running backends. (It's a little tricky to prevent
a concurrently-starting backend from getting in there, since the new
backend is not able to lock anything at the time it tries to look up
its database in pg_database. My solution is to recheck that the DB is
OK at the end of InitPostgres. It may not be a 100% solution, but it's
a lot better than no interlock at all; see the sketch below this list.)
* In ALTER TABLE RENAME, flush buffers for the relation before doing the
rename of the physical files, to ensure we don't get failures later from
mdblindwrt().
* Update TRUNCATE patch so that it actually compiles against current
sources :-(.
You should do "make clean all" after pulling these changes.
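Rough sketch of the interlock idea from the DESTROY DATABASE item above,
with made-up names (the real check walks the per-backend PROC info now
linked from sinval's array); the end-of-InitPostgres recheck then catches a
DESTROY that slipped in before our own entry became visible.

    /* Made-up illustration only. */
    #define MAX_BACKENDS 64

    typedef struct BackendProc
    {
        int          active;        /* slot in use? */
        unsigned int databaseOid;   /* database this backend is running in */
    } BackendProc;

    static BackendProc proc_array[MAX_BACKENDS];    /* imagine: shared memory */

    /* DESTROY DATABASE refuses to proceed if any running backend is
     * connected to the target database. */
    static int
    database_has_backends(unsigned int dbOid)
    {
        for (int i = 0; i < MAX_BACKENDS; i++)
        {
            if (proc_array[i].active && proc_array[i].databaseOid == dbOid)
                return 1;
        }
        return 0;
    }
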
additional argument specifying the kind of lock to acquire/release (or
'NoLock' to do no lock processing). Ensure that all relations are locked
with some appropriate lock level before being examined --- this ensures
that relevant shared-inval messages have been processed and should prevent
problems caused by concurrent VACUUM. Fix several bugs having to do with
mismatched increment/decrement of relation ref count and mismatched
heap_open/close (which amounts to the same thing). A bogus ref count on
a relation doesn't matter much *unless* a SI Inval message happens to
arrive at the wrong time, which is probably why we got away with this
sloppiness for so long. Repair missing grab of AccessExclusiveLock in
DROP TABLE, ALTER/RENAME TABLE, etc, as noted by Hiroshi.
Recommend 'make clean all' after pulling this update; I modified the
Relation struct layout slightly.
Will post further discussion to pghackers list shortly.
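The calling pattern ends up looking about like this (a sketch, not
standalone code; the by-name variant heap_openr and the AccessShareLock
level are my assumptions here - check the headers for the exact signatures):

    /* Sketch of the new calling convention inside the backend. */
    Relation    rel;

    /* Open and lock in one step; taking the lock here is what guarantees
     * that pending shared-inval messages have been processed before we
     * examine the relation. */
    rel = heap_openr("mytable", AccessShareLock);

    /* ... use the relation ... */

    /* Release the lock along with the refcount; passing NoLock instead
     * does no lock processing (e.g. when the lock should be held until
     * end of transaction). */
    heap_close(rel, AccessShareLock);
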
and possibly for other cases too:
DO NOT cache the status of a transaction in an unknown state
(i.e. one that is neither committed nor aborted).
Example:
T1 reads a row updated/inserted by the still-running T2 and caches T2's status.
T2 commits.
Now T1 reads a row updated by T2 that already has HEAP_XMAX_COMMITTED
set in t_infomask (so the cached T2 status is not refreshed).
Now T1's EvalPlanQual gets the updated row version, which lacks
HEAP_XMIN_COMMITTED -> TransactionIdDidCommit(t_xmin) and
TransactionIdDidAbort(t_xmin) both return FALSE (because of the stale
cached status), so T1 decides that t_xmin is not committed and gets the
ERROR above.
It's too late to find a smarter way to handle such cases, so I just
changed xact status caching (see the sketch after this list) and got rid of
TransactionIdFlushCache() from the code.
Changed: transam.c, xact.c, lmgr.c and transam.h - the last three
only because TransactionIdFlushCache() is removed.
2. heapam.c:
T1 marked a row for update. T2 waits for T1 commit/abort.
T1 commits. T3 updates the row before T2 locks the row's page.
Now T2 sees that the row's new t_xmax differs from the xact id (T1) it
was waiting for. The old code hit an Assert here; the new code goes to
HeapTupleSatisfiesUpdate instead. Plus some obvious related changes.
3. Added Assert to vacuum.c
4. bufmgr.c: break
Assert(buf->r_locks == 0 && !buf->ri_lock)
into two Asserts.
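The caching rule now amounts to this (standalone sketch, made-up names;
fetch_xact_status stands in for consulting the transaction log): only
terminal states are safe to remember, because an "in progress" answer can
become stale at any moment - which is exactly what produced the ERROR above.

    /* Made-up illustration only. */
    typedef enum { XACT_IN_PROGRESS, XACT_COMMITTED, XACT_ABORTED } XactStatus;

    #define XACT_CACHE_SIZE 1024
    static XactStatus   status_cache[XACT_CACHE_SIZE];   /* zero = in progress */
    static unsigned int cached_xid[XACT_CACHE_SIZE];

    /* Stand-in for looking the xact up in the transaction log. */
    extern XactStatus fetch_xact_status(unsigned int xid);

    static XactStatus
    get_xact_status(unsigned int xid)
    {
        unsigned int slot = xid % XACT_CACHE_SIZE;

        if (cached_xid[slot] == xid && status_cache[slot] != XACT_IN_PROGRESS)
            return status_cache[slot];

        XactStatus s = fetch_xact_status(xid);

        /* The fix: never cache an in-progress answer; it can change to
         * "committed" at any moment, and a stale cached value is exactly
         * what tripped the ERROR described above. */
        if (s != XACT_IN_PROGRESS)
        {
            cached_xid[slot] = xid;
            status_cache[slot] = s;
        }
        return s;
    }
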
2. Much faster btree tuple deletion in the case where the first index
tuple on a page is deleted (no movement to the left page(s)).
3. Remember the blkno of the new root page in the BTPageOpaque of the
left/right siblings when the root page is split.
OK. I made patches replacing all occurrences of "#if FALSE" or "#if 0"
with "#ifdef NOT_USED" in the current sources. I have tested these patches
by verifying that the resulting postgres binaries are identical.
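The transformation is purely mechanical, e.g.:

    /* before */
    #if 0
        elog(DEBUG, "dead code kept for reference");
    #endif

    /* after */
    #ifdef NOT_USED
        elog(DEBUG, "dead code kept for reference");
    #endif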