appropriate pin-count manipulation, and instead use ReleaseAndReadBuffer.
Make use of the fact that the passed-in buffer (if there is one) must
be pinned to avoid grabbing the bufmgr spinlock when we are able to
return this same buffer. Eliminate unnecessary 'previous tuple' and
'next tuple' fields of HeapScanDesc and IndexScanDesc, thereby removing
a whole lot of bookkeeping from heap_getnext() and related routines.
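As an illustration of the fast path this enables, here is a minimal standalone sketch (not the actual bufmgr code; Buffer, BlockNumber, and the helper routines are stand-ins): when the caller already holds a pin on the right page, no shared-table lookup or spinlock is needed at all.

#include <stdio.h>

typedef int Buffer;
typedef unsigned BlockNumber;
#define InvalidBuffer 0

/* Hypothetical stand-ins for bufmgr calls. */
static BlockNumber BufferBlockNumber(Buffer buf) { return (BlockNumber) buf; }
static Buffer ReadNewBuffer(BlockNumber blk)     { return (Buffer) blk; }
static void   ReleasePin(Buffer buf)             { (void) buf; }

/*
 * release_and_read_buffer: if the caller already holds a pin on a buffer
 * for the block it wants, just return it; the pin guarantees the page
 * cannot be evicted, so no shared lookup (and no spinlock) is needed.
 */
static Buffer
release_and_read_buffer(Buffer held, BlockNumber target)
{
    if (held != InvalidBuffer && BufferBlockNumber(held) == target)
        return held;                /* fast path: same page, keep the pin */

    if (held != InvalidBuffer)
        ReleasePin(held);           /* drop the old pin... */
    return ReadNewBuffer(target);   /* ...and take the slow path */
}

int main(void)
{
    Buffer buf = ReadNewBuffer(42);
    buf = release_and_read_buffer(buf, 42);   /* fast path */
    buf = release_and_read_buffer(buf, 43);   /* slow path */
    printf("now holding buffer for block %u\n", BufferBlockNumber(buf));
    return 0;
}
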
when we need to move to a new page; as long as we can insert the new
tuple on the same page as before, we only need LockBuffer and not the
expensive stuff. Also, twiddle bufmgr interfaces to avoid redundant
lseeks in RelationGetBufferForTuple and BufferAlloc. Successive inserts
now require one lseek per page added, rather than one per tuple with
several additional ones at each page boundary as happened before.
Lock contention when multiple backends are inserting into the same table
is also greatly reduced.
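A rough sketch of the caching idea, assuming a hypothetical per-relation field and storage-layer call (none of these names are the real ones): as long as the remembered target page still has room, successive inserts never ask the storage manager for the file length.

#include <stdbool.h>
#include <stdio.h>

typedef unsigned BlockNumber;
#define InvalidBlockNumber ((BlockNumber) 0xFFFFFFFF)

/* Hypothetical per-relation state and storage-layer call. */
typedef struct { BlockNumber cached_target; } FakeRelation;
static unsigned fake_nblocks_calls = 0;
static BlockNumber fake_nblocks(void) { fake_nblocks_calls++; return 10; }
static bool page_has_room(BlockNumber blk, size_t len) { (void) blk; return len < 100; }

/*
 * Pick a page for a new tuple.  Because the relation remembers the page it
 * used last time, successive small inserts never ask the storage layer for
 * the file length (which costs an lseek); only moving to a new page does.
 */
static BlockNumber
get_target_block(FakeRelation *rel, size_t tuple_len)
{
    if (rel->cached_target != InvalidBlockNumber &&
        page_has_room(rel->cached_target, tuple_len))
        return rel->cached_target;              /* cheap path: same page */

    rel->cached_target = fake_nblocks();        /* expensive path: new page */
    return rel->cached_target;
}

int main(void)
{
    FakeRelation rel = { InvalidBlockNumber };
    for (int i = 0; i < 5; i++)
        get_target_block(&rel, 40);
    printf("lseek-equivalent calls for 5 inserts: %u\n", fake_nblocks_calls);
    return 0;
}
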
not being consulted anywhere, so remove it and remove the _mdnblocks()
calls that were used to set it. Change smgrextend interface to pass in
the target block number (ie, current file length) --- the caller always
knows this already, having already done smgrnblocks(), so it's silly to
do it over again inside mdextend. Net result: extension of a file now
takes one lseek(SEEK_END) and a write(), not three lseeks and a write.
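A plain POSIX sketch of the resulting extension path (illustrative only, not md.c itself): the caller measures the file once, and the extend step just seeks to the known offset and writes.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define BLCKSZ 8192

/* Append one zero-filled block at the given block number.  The caller has
 * just computed nblocks, so no extra lseek(SEEK_END) is needed here. */
static int
extend_file(int fd, unsigned blockno)
{
    char page[BLCKSZ];
    memset(page, 0, sizeof(page));
    if (lseek(fd, (off_t) blockno * BLCKSZ, SEEK_SET) < 0)
        return -1;
    return write(fd, page, BLCKSZ) == BLCKSZ ? 0 : -1;
}

int main(void)
{
    int fd = open("demo_relation", O_RDWR | O_CREAT, 0600);
    if (fd < 0) { perror("open"); return 1; }

    /* One lseek(SEEK_END) tells us the current length in blocks ... */
    off_t len = lseek(fd, 0, SEEK_END);
    unsigned nblocks = (unsigned) (len / BLCKSZ);

    /* ... and the write itself does not need to re-derive it. */
    if (extend_file(fd, nblocks) != 0) { perror("extend"); return 1; }
    printf("file now has %u block(s)\n", nblocks + 1);
    close(fd);
    return 0;
}
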
*before* acquiring shlock on buffer context. This way we should be
protected against conflicts with FlushRelationBuffers.
(It seems we never take an exclusive lock and then call StartBufferIO for
the same buffer, so there should be no deadlock here, but we'd better
check this very soon.)
waste of cycles on single-CPU machines, and of dubious utility on multi-CPU
machines too.
Tweak s_lock_stuck so that the caller can specify the timeout interval, and
increase the interval before declaring a stuck spinlock for buffer locks and
XLOG locks.
On systems that have fdatasync(), use that rather than fsync() to sync WAL
log writes. Ensure that WAL file is entirely allocated during XLogFileInit.
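A standalone sketch of the preallocation idea, using plain POSIX calls rather than the real XLogFileInit (the file name, segment size, and HAVE_FDATASYNC guard are assumptions): zero-fill the whole segment up front, then flush it, so later WAL writes never change the file's length and fdatasync() can safely skip flushing metadata.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define LOG_SEG_SIZE (16 * 1024 * 1024)
#define XLOG_BLCKSZ  8192

int main(void)
{
    char zeroes[XLOG_BLCKSZ];
    int  fd = open("demo_xlog_segment", O_RDWR | O_CREAT | O_TRUNC, 0600);
    if (fd < 0) { perror("open"); return 1; }

    memset(zeroes, 0, sizeof(zeroes));

    /* Writing the whole segment up front means later WAL writes only
     * change the file's contents, never its length. */
    for (long written = 0; written < LOG_SEG_SIZE; written += XLOG_BLCKSZ)
        if (write(fd, zeroes, XLOG_BLCKSZ) != XLOG_BLCKSZ) { perror("write"); return 1; }

#ifdef HAVE_FDATASYNC
    /* fdatasync() skips flushing metadata such as mtime, which is safe to
     * skip once the file length is fixed. */
    if (fdatasync(fd) != 0) { perror("fdatasync"); return 1; }
#else
    if (fsync(fd) != 0) { perror("fsync"); return 1; }
#endif
    close(fd);
    puts("segment preallocated");
    return 0;
}
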
are treated more like 'cancel' interrupts: the signal handler sets a
flag that is examined at well-defined spots, rather than trying to cope
with an interrupt that might happen anywhere. See pghackers discussion
of 1/12/01.
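The pattern being described is the classic set-a-flag-in-the-handler approach; a minimal standalone sketch follows (names and details are illustrative, not the backend's actual code):

#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

static volatile sig_atomic_t shutdown_requested = 0;

static void
handle_sigterm(int signo)
{
    (void) signo;
    shutdown_requested = 1;     /* nothing else is async-signal-safe to do */
}

/* A safe spot where acting on the pending signal cannot corrupt shared
 * state; the real backend checks at well-defined points like this. */
static void
check_for_shutdown(void)
{
    if (shutdown_requested)
    {
        fprintf(stderr, "terminating at a safe point\n");
        exit(0);
    }
}

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof(sa));
    sa.sa_handler = handle_sigterm;
    sigaction(SIGTERM, &sa, NULL);

    for (;;)
    {
        /* ... do one unit of work ... */
        sleep(1);
        check_for_shutdown();   /* examine the flag between units of work */
    }
}
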
are now critical sections, so as to ensure die() won't interrupt us while
we are munging shared-memory data structures. Avoid insecure intermediate
states in some code that proc_exit will call, like palloc/pfree. Rename
START/END_CRIT_CODE to START/END_CRIT_SECTION, since that seems to be
what people tend to call them anyway, and make them be called with () like
a function call, in hopes of not confusing pg_indent.
I doubt that this is sufficient to make SIGTERM safe anywhere; there's
just too much code that could get invoked during proc_exit().
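A minimal sketch of the macro pattern (the macro names mirror the ones above, but the bodies and the die handling here are illustrative only):

#include <stdio.h>

static volatile int CritSectionCount = 0;
static volatile int die_pending = 0;

/* Defined to be invoked with parentheses, like function calls, so that
 * automatic indentation tools treat them as ordinary statements. */
#define START_CRIT_SECTION()  (CritSectionCount++)
#define END_CRIT_SECTION() \
    do { \
        if (--CritSectionCount == 0 && die_pending) \
            handle_pending_die(); \
    } while (0)

static void
handle_pending_die(void)
{
    fprintf(stderr, "exiting now that shared structures are consistent\n");
}

int main(void)
{
    START_CRIT_SECTION();
    /* ... munge shared-memory data structures; a die() request arriving
     * here must be deferred (simulated by setting the flag directly) ... */
    die_pending = 1;
    END_CRIT_SECTION();        /* the deferred request is honored here */
    return 0;
}
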
assume that TAS() will always succeed the first time, even if the lock
is known to be free. Also, make sure that code will eventually time out
and report a stuck spinlock, rather than looping forever. Small cleanups
in s_lock.h, too.
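A standalone sketch of such a retry loop, using C11 atomics as a stand-in for the platform TAS (constants and names are illustrative):

#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

typedef atomic_flag slock_t;

/* Returns 0 on success (lock acquired), nonzero on failure. */
static int TAS(slock_t *lock) { return atomic_flag_test_and_set(lock); }

#define SPINS_PER_DELAY     100
#define TIMEOUT_ITERATIONS  (60 * 1000000 / 10000)   /* ~1 minute of 10ms naps */

static void
s_lock(slock_t *lock, const char *file, int line)
{
    int spins = 0, delays = 0;

    while (TAS(lock))           /* never assume the first TAS succeeds */
    {
        if (++spins >= SPINS_PER_DELAY)
        {
            spins = 0;
            usleep(10000);      /* back off instead of burning CPU */
            if (++delays >= TIMEOUT_ITERATIONS)
            {
                fprintf(stderr, "stuck spinlock at %s:%d\n", file, line);
                abort();        /* eventually report, never loop forever */
            }
        }
    }
}

int main(void)
{
    slock_t lock = ATOMIC_FLAG_INIT;
    s_lock(&lock, __FILE__, __LINE__);
    puts("acquired");
    atomic_flag_clear(&lock);
    return 0;
}
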
included by everything that includes bufmgr.h --- it's supposed to be
internals, after all, not part of the API! This fixes the conflict
against FreeBSD headers reported by Rosenman, by making it unnecessary
for s_lock.h to be included by plperl.c.
IPC key assignment will now work correctly even when multiple postmasters
are using the same logical port number (which is possible given the -k switch).
There is only one shared-mem segment per postmaster now, not 3.
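For illustration, a sketch of port-derived key assignment with collision retry, using raw System V calls; the key formula and constants are assumptions, not the actual ones.

#include <stdio.h>
#include <sys/ipc.h>
#include <sys/shm.h>

typedef int IpcMemoryKey;

/* Derive a candidate key from the logical port; bump the key until a
 * segment can be created exclusively, so two postmasters on the same
 * port number do not collide. */
static int
create_shared_segment(int port, size_t size, IpcMemoryKey *key_out)
{
    for (IpcMemoryKey key = port * 1000 + 1; key < port * 1000 + 100; key++)
    {
        int shmid = shmget(key, size, IPC_CREAT | IPC_EXCL | 0600);
        if (shmid >= 0)
        {
            *key_out = key;
            return shmid;
        }
    }
    return -1;
}

int main(void)
{
    IpcMemoryKey key;
    int shmid = create_shared_segment(5432, 1024, &key);
    if (shmid < 0) { perror("shmget"); return 1; }
    printf("created segment, key %d\n", key);
    shmctl(shmid, IPC_RMID, NULL);
    return 0;
}
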
Rip out broken code for non-TAS case in bufmgr and xlog, substitute a
complete S_LOCK emulation using semaphores in spin.c. TAS and non-TAS
logic is now exactly the same.
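A sketch of how a TAS() can be emulated on top of a System V semaphore so that the calling logic stays identical to the hardware case (details illustrative, not the actual spin.c code): a non-blocking decrement either takes the lock or reports it busy.

#include <errno.h>
#include <stdio.h>
#include <sys/ipc.h>
#include <sys/sem.h>

union semun { int val; struct semid_ds *buf; unsigned short *array; };

/* Try to decrement the semaphore without blocking.  Returns 0 if we got
 * the "lock" (semaphore went 1 -> 0), nonzero if someone else holds it,
 * matching the hardware TAS convention. */
static int
tas_sema(int semid)
{
    struct sembuf op = { 0, -1, IPC_NOWAIT | SEM_UNDO };
    if (semop(semid, &op, 1) == 0)
        return 0;
    return (errno == EAGAIN) ? 1 : -1;
}

static void
s_unlock_sema(int semid)
{
    struct sembuf op = { 0, 1, SEM_UNDO };
    semop(semid, &op, 1);
}

int main(void)
{
    union semun arg;
    int semid = semget(IPC_PRIVATE, 1, IPC_CREAT | 0600);
    if (semid < 0) { perror("semget"); return 1; }

    arg.val = 1;                        /* 1 == "unlocked" */
    if (semctl(semid, 0, SETVAL, arg) < 0) { perror("semctl"); return 1; }

    printf("first TAS: %d (0 = acquired)\n", tas_sema(semid));
    printf("second TAS: %d (1 = already held)\n", tas_sema(semid));
    s_unlock_sema(semid);

    semctl(semid, 0, IPC_RMID);
    return 0;
}
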
When deadlock is detected, "Deadlock detected" is now the elog(ERROR)
message, rather than a NOTICE that comes out before an unhelpful ERROR.
(WAL logging for this is not done yet, however.) Clean up a number of really
crufty things that are no longer needed now that DROP behaves nicely. Make
temp table mapper do the right things when drop or rename affecting a temp
table is rolled back. Also, remove "relation modified while in use" error
check, in favor of locking tables at first reference and holding that lock
throughout the statement.
from bufmgr - it would be nice to have a separate hash in smgr
for node <--> fd mappings, but for the moment it is easy to
add a new hash to the relcache.
Fixed small bug in xlog.c:ReadRecord.
and DropBuffers. Formerly we cleared the flag for each buffer currently
belonging to the target rel or database, but that's completely wrong!
Must look at BufferTagLastDirtied to see whether the BufferDirtiedByMe
flag is relevant to target rel or not; this is *independent* of the
current contents of the buffer. Vadim spotted this problem, but his
fix was only partially correct...
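To make the distinction concrete, a toy sketch (the types and field names are stand-ins): the decision keys off the relation recorded when we dirtied the buffer, not whatever relation the buffer happens to hold now.

#include <stdbool.h>
#include <stdio.h>

typedef unsigned Oid;

typedef struct
{
    Oid  current_rel;        /* relation whose page is in the buffer now */
    bool dirtied_by_me;      /* did this backend dirty some page via this slot? */
    Oid  last_dirtied_rel;   /* relation the flag actually refers to */
} LocalBufferInfo;

/*
 * Clear our "I dirtied this" bookkeeping for a relation being dropped.
 * The decision must use last_dirtied_rel, not current_rel: the slot may
 * have been reused for a different relation since we dirtied it.
 */
static void
forget_dirtied_by_me(LocalBufferInfo *bufs, int n, Oid target_rel)
{
    for (int i = 0; i < n; i++)
        if (bufs[i].dirtied_by_me && bufs[i].last_dirtied_rel == target_rel)
            bufs[i].dirtied_by_me = false;
}

int main(void)
{
    LocalBufferInfo bufs[2] = {
        { 1001, true, 2002 },   /* now holds rel 1001, but we dirtied rel 2002 in it */
        { 2002, true, 1001 },   /* now holds rel 2002, but we dirtied rel 1001 in it */
    };
    forget_dirtied_by_me(bufs, 2, 2002);
    printf("buf0 flag=%d buf1 flag=%d\n", bufs[0].dirtied_by_me, bufs[1].dirtied_by_me);
    return 0;
}
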
Hiroshi. ReleaseRelationBuffers now removes rel's buffers from pool,
instead of merely marking them nondirty. The old code would leave valid
buffers for a deleted relation, which didn't cause any known problems
but can't possibly be a good idea. There were several places which called
ReleaseRelationBuffers *and* FlushRelationBuffers, which is now
unnecessary; but there were others that did not. FlushRelationBuffers
no longer emits a warning notice if it finds dirty buffers to flush,
because with the current bufmgr behavior that's not an unexpected
condition. Also, FlushRelationBuffers will flush out all dirty buffers
for the relation regardless of block number. This ensures that
pg_upgrade's expectations are met about tuple on-row status bits being
up-to-date on disk. Lastly, tweak BufTableDelete() to clear the
buffer's tag so that no one can mistake it for being a still-valid
buffer for the page it once held. Formerly, the buffer would not be
found by buffer hashtable searches after BufTableDelete(), but it would
still be thought to belong to its old relation by the routines that
sequentially scan the shared-buffer array. Again I know of no bugs
caused by that, but it still can't be a good idea.
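A toy sketch of why clearing the tag matters (types and names are stand-ins): hash lookups already miss the deleted buffer, but code that walks the buffer array directly trusts the stored tag.

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

typedef struct { unsigned relid; unsigned blocknum; } BufferTag;

typedef struct
{
    BufferTag tag;
    bool      in_hashtable;   /* reachable via the buffer hash table? */
} BufferSlot;

#define CLEAR_BUFFERTAG(tagp)  memset((tagp), 0, sizeof(BufferTag))

/* Remove a buffer from the lookup table *and* wipe its tag, so a
 * sequential scan of the slot array cannot attribute it to its old
 * relation afterwards. */
static void
buf_table_delete(BufferSlot *slot)
{
    slot->in_hashtable = false;
    CLEAR_BUFFERTAG(&slot->tag);
}

int main(void)
{
    BufferSlot slot = { { 1259, 7 }, true };
    buf_table_delete(&slot);
    printf("slot now tagged rel=%u block=%u, hashed=%d\n",
           slot.tag.relid, slot.tag.blocknum, slot.in_hashtable);
    return 0;
}
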
whether to do fsync or not, and if so (which should be seldom) just
do the fsync immediately. This way we need not build data structures
in md.c/fd.c for blind writes.
as a shared dirtybit for each shared buffer. The shared dirtybit still
controls writing the buffer, but the local bit controls whether we need
to fsync the buffer's file. This arrangement fixes a bug that allowed
some required fsyncs to be missed, and should improve performance as well.
For more info see my post of same date on pghackers.
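A toy sketch of the two-bit arrangement (all names illustrative): the shared bit tracks whether the page needs writing, while the backend-local bit remembers that this backend still owes an fsync even after someone else writes the page.

#include <stdbool.h>
#include <stdio.h>

typedef struct
{
    bool shared_dirty;                 /* page differs from what is on disk */
} SharedBufferDesc;

typedef struct
{
    bool fsync_needed_by_me;           /* this backend's private bookkeeping */
} LocalBufferStatus;

static void
mark_dirty(SharedBufferDesc *shared, LocalBufferStatus *local)
{
    shared->shared_dirty = true;       /* someone must write the page */
    local->fsync_needed_by_me = true;  /* and *we* must make sure it hits disk */
}

static void
write_buffer(SharedBufferDesc *shared)
{
    /* another backend may write the page out ... */
    shared->shared_dirty = false;
}

int main(void)
{
    SharedBufferDesc shared = { false };
    LocalBufferStatus local = { false };

    mark_dirty(&shared, &local);
    write_buffer(&shared);             /* page written by somebody else */

    /* ... but the local bit still remembers that our transaction needs an
     * fsync of the file before it can commit safely. */
    printf("shared dirty=%d, fsync still owed by me=%d\n",
           shared.shared_dirty, local.fsync_needed_by_me);
    return 0;
}
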
In the event of an elog() while the mode was set to immediate write,
there was no way for it to be set back to the normal delayed write.
The mechanism was a waste of space and cycles anyway, since the only user
was varsup.c, which could perfectly well call FlushBuffer directly.
Now it does just that, and the notion of a write mode is gone.
integers) to be strings instead of 'double'. We convert from string form
to internal representation only after type resolution has determined the
correct type for the constant. This eliminates loss-of-precision worries
and gets rid of the change in behavior seen at 17 digits with the
previous kluge.
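A standalone sketch of the defer-the-conversion idea (names illustrative; the real code works on parse nodes and Postgres types): keep the literal's original text and convert it only once the resolved type is known, so nothing is forced through double prematurely.

#include <stdio.h>
#include <stdlib.h>

typedef enum { RESOLVED_INT4, RESOLVED_FLOAT8, RESOLVED_NUMERIC_TEXT } ResolvedType;

typedef struct
{
    const char *text;       /* literal exactly as written in the query */
} UnresolvedConst;

/* Convert the saved text using the routine for the resolved type. */
static void
convert_const(const UnresolvedConst *c, ResolvedType t)
{
    switch (t)
    {
        case RESOLVED_INT4:
            printf("int4: %ld\n", strtol(c->text, NULL, 10));
            break;
        case RESOLVED_FLOAT8:
            printf("float8: %.17g\n", strtod(c->text, NULL));
            break;
        case RESOLVED_NUMERIC_TEXT:
            /* an arbitrary-precision type can consume the text directly,
             * with no intermediate double at all */
            printf("numeric: %s\n", c->text);
            break;
    }
}

int main(void)
{
    UnresolvedConst c = { "123456789012345678901.23456789" };
    convert_const(&c, RESOLVED_NUMERIC_TEXT);   /* all digits preserved */
    convert_const(&c, RESOLVED_FLOAT8);         /* precision loss only if asked for */
    return 0;
}
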
it wants to release. This leads to a race condition: does the backend
that's trying to flush the buffer manage to do so before the one that's
deleting the relation gets there? Usually no problem, I expect, but on occasion this could
lead to hard-to-reproduce complaints from md.c, especially mdblindwrt.