index pages: when _bt_getbuf asks the FSM for a free index page, it is
possible (and, in some cases, even moderately likely) that the answer
will be the same page that _bt_split is trying to split. _bt_getbuf
already knew that the returned page might not be free, but it wasn't
prepared for the possibility that even trying to lock the page could
be problematic. Fix by doing a conditional rather than unconditional
grab of the page lock.
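A minimal sketch of the conditional grab, using an ordinary pthreads
trylock in place of the backend's buffer-lock primitives (names here are
illustrative, not the actual ones):

    #include <pthread.h>
    #include <stdbool.h>

    /* Conditionally claim a candidate page handed back by the FSM.  The
     * grab never blocks, so the caller cannot get stuck on the very page
     * its own _bt_split is in the middle of splitting; it just moves on. */
    bool
    try_claim_candidate_page(pthread_mutex_t *page_lock)
    {
        if (pthread_mutex_trylock(page_lock) != 0)
            return false;       /* page busy: pick another page instead */

        /* ... verify the page really is free before recycling it ... */
        return true;
    }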
free'd for every transaction or statement, respectively. This patch
puts these data structures into static memory, thus saving a few CPU
cycles and two malloc calls per transaction or (in isolation level
READ COMMITTED) per query.
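As a trivial sketch of the pattern (struct and function names invented):

    #include <string.h>

    typedef struct ScratchState
    {
        int  counter;
        char buf[64];
    } ScratchState;

    /* Before: malloc at transaction/query start, free at the end.
     * After: one statically allocated instance, reset in place. */
    static ScratchState scratch_static;

    ScratchState *
    get_scratch_state(void)
    {
        memset(&scratch_static, 0, sizeof(scratch_static));
        return &scratch_static;
    }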
Manfred Koizar
least-recently-used strategy from clog.c into slru.c. It doesn't
change any visible behaviour and passes all regression tests plus a
TruncateCLOG test done manually.
Apart from the refactoring, I made a little change to SlruRecentlyUsed,
formerly ClogRecentlyUsed: it now skips incrementing lru_counts if
slotno is already the LRU slot, thus saving a few CPU cycles. To make
this work, lru_counts are initialised to 1 in SimpleLruInit.
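One way to picture the early exit (the bookkeeping below is invented and
much simplified; the real SlruRecentlyUsed works on the shared SLRU
control structure):

    #define NUM_SLOTS 8

    static unsigned int lru_counts[NUM_SLOTS];

    void
    init_lru_counts(void)
    {
        int i;

        /* start at 1 so a slot's very first use is not skipped below */
        for (i = 0; i < NUM_SLOTS; i++)
            lru_counts[i] = 1;
    }

    void
    mark_recently_used(int slotno)
    {
        int i;

        /* Early exit: if this slot's count already has the value the full
         * pass would leave it with, bumping every other slot by one could
         * not change their relative order, so skip the loop entirely. */
        if (lru_counts[slotno] == 0)
            return;

        for (i = 0; i < NUM_SLOTS; i++)
            lru_counts[i]++;
        lru_counts[slotno] = 0;     /* lowest count = touched last */
    }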
SimpleLru will be used by pg_subtrans (part of the nested transactions
project), so the main purpose of this patch is to avoid future code
duplication.
Manfred Koizar
Win32 port is now called 'win32' rather than 'win'
add -lwsock32 on Win32
make gethostname() be only used when kerberos4 is enabled
use /port/getopt.c
new /port/opendir.c routines
disable GUC unix_socket_group on Win32
convert some keywords.c symbols to KEYWORD_P to prevent conflict
create new FCNTL_NONBLOCK macro to turn off socket blocking (see the sketch after this list)
create new /include/port.h file that has /port prototypes, move
out of c.h
new /include/port/win32_include dir to hold missing include files
work around ERROR being defined in Win32 includes
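For the FCNTL_NONBLOCK item, the macro typically hides something along
these lines (a generic illustration, not the macro's actual definition):

    #ifdef WIN32
    #include <winsock2.h>
    /* Winsock has no fcntl(); non-blocking mode is set via ioctlsocket() */
    #define FCNTL_NONBLOCK(sock) \
        do { u_long _on = 1; ioctlsocket((sock), FIONBIO, &_on); } while (0)
    #else
    #include <fcntl.h>
    /* Unix: set O_NONBLOCK on the descriptor with fcntl() */
    #define FCNTL_NONBLOCK(sock) \
        fcntl((sock), F_SETFL, O_NONBLOCK)
    #endif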
detected during buffer dump to be labeled with the buffer location.
For example, if a page LSN is clobbered, we now produce something like
ERROR: XLogFlush: request 2C000000/8468EC8 is not satisfied --- flushed only
to 0/8468EF0
CONTEXT: writing block 0 of relation 428946/566240
whereas before there was no convenient way to find out which page had
been trashed.
harmless on signed-char machines but would lead to core dump in the
deadlock detection code if char is unsigned. Amazingly, this bug has
been here since 7.1 and yet wasn't reported till now. Thanks to Robert
Bruccoleri for providing the opportunity to track it down.
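The bug class is easy to reproduce in isolation (a generic illustration,
not the deadlock-checker code itself):

    #include <stdio.h>

    int
    main(void)
    {
        char c = -1;    /* a negative sentinel stored in plain "char" */

        /* On signed-char machines this branch is taken; where char is
         * unsigned, c holds 255 and the test silently fails. */
        if (c < 0)
            printf("sentinel detected\n");
        else
            printf("sentinel missed: char is unsigned here\n");
        return 0;
    }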
page when it's read in, per pghackers discussion around 17-Feb. Add a
GUC variable zero_damaged_pages that causes the response to be a WARNING
followed by zeroing the page, rather than the normal ERROR; this is per
Hiroshi's suggestion that there needs to be a way to get at the data
in the rest of the table.
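In outline, the read-time check now behaves like this (zero_damaged_pages
is the real GUC name; the function and helpers below are placeholders):

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    static bool zero_damaged_pages = false;    /* GUC, off by default */

    void
    verify_page_on_read(char *page, size_t page_size, bool header_looks_valid)
    {
        if (header_looks_valid)
            return;

        if (zero_damaged_pages)
        {
            /* degrade to a WARNING and blank the page so the rest of the
             * table stays readable */
            fprintf(stderr, "WARNING: invalid page header; zeroing page\n");
            memset(page, 0, page_size);
        }
        else
        {
            /* normal response: hard ERROR, the query is aborted */
            fprintf(stderr, "ERROR: invalid page header\n");
        }
    }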
(materialization into a tuple store) discussed on pgsql-hackers earlier.
I've updated the documentation and the regression tests.
Notes on the implementation:
- I needed to change the tuple store API slightly -- it assumes that it
won't be used to hold data across transaction boundaries, so the temp
files that it uses for on-disk storage are automatically reclaimed at
end-of-transaction. I added a flag to tuplestore_begin_heap() to control
this behavior (see the sketch following these notes). Is changing the tuple store API in this fashion OK?
- in order to store executor results in a tuple store, I added a new
CommandDest. This works well for the most part, with one exception: the
current DestFunction API doesn't provide enough information to allow the
Executor to store results into an arbitrary tuple store (where the
particular tuple store to use is chosen by the call site of
ExecutorRun). To work around this, I've temporarily hacked up a solution
that works, but is not ideal: since the receiveTuple DestFunction is
passed the portal name, we can use that to look up the Portal data
structure for the cursor and then use that to get at the tuple store the
Portal is using. This unnecessarily ties the Portal code to the
tupleReceiver code, but it works...
The proper fix for this is probably to change the DestFunction API --
Tom suggested passing the full QueryDesc to the receiveTuple function.
In that case, callers of ExecutorRun could "subclass" QueryDesc to add
any additional fields that their particular CommandDest needed to get
access to. This approach would work, but I'd like to think about it for
a little bit longer before deciding which route to take. In the meantime,
the code works fine, so I don't think a fix is urgent.
- (semi-related) I added a NO SCROLL keyword to DECLARE CURSOR, and
adjusted the behavior of SCROLL in accordance with the discussion on
-hackers.
- (unrelated) Cleaned up some SGML markup in sql.sgml, copy.sgml
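Rough sketch of the first point above: the new flag simply tells
end-of-transaction cleanup to leave the store's temp file alone (all names
are simplified stand-ins for the tuplestore internals):

    #include <stdbool.h>
    #include <stdio.h>
    #include <stdlib.h>

    typedef struct Store
    {
        bool  interXact;    /* true: temp file survives transaction end */
        FILE *tempfile;     /* on-disk spill storage */
    } Store;

    Store *
    store_begin(bool interXact)
    {
        Store *st = malloc(sizeof(Store));

        st->interXact = interXact;
        st->tempfile = tmpfile();
        return st;
    }

    /* called from end-of-transaction cleanup */
    void
    store_at_eoxact(Store *st)
    {
        if (!st->interXact && st->tempfile != NULL)
        {
            fclose(st->tempfile);   /* default: reclaim the spill file now */
            st->tempfile = NULL;
        }
        /* with interXact = true the file stays open, e.g. for a cursor
         * that must outlive the creating transaction */
    }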
Neil Conway
at database shutdown, and then load it again at database startup. This
preserves our hard-won knowledge of free space across restarts (given
an orderly shutdown, that is).
Adjustable threshold is gone in favor of keeping track of total requested
page storage and doling out proportional fractions to each relation
(with a minimum amount per relation, and some quantization of the results
to avoid thrashing with small changes in page counts). Provide special-
case code for indexes so as not to waste space storing useless page
free space counts. Restructure internal data storage to be a flat array
instead of list-of-chunks; this may cost a little more work in data
copying when reorganizing, but allows binary search to be used during
lookup_fsm_page_entry().
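The allocation rule amounts to something like this (constants and names
invented for illustration):

    #define MIN_PAGES_PER_REL 10   /* floor so small relations keep entries */
    #define QUANTUM            8   /* round shares to damp small fluctuations */

    int
    fsm_share_for_rel(int requested, int total_requested, int total_budget)
    {
        int share;

        if (total_requested <= 0)
            return MIN_PAGES_PER_REL;

        /* proportional slice of the overall page-storage budget */
        share = (int) ((double) requested / total_requested * total_budget);

        /* quantize, then apply the per-relation minimum */
        share -= share % QUANTUM;
        if (share < MIN_PAGES_PER_REL)
            share = MIN_PAGES_PER_REL;

        return share;
    }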
RelOid_pg_class, and transaction locks XactLockTableId. RelId is renamed
to objId.
- LockObject() and UnlockObject() functions created, and their use
sprinkled throughout the code to do descent locking for domains and
types. They accept lock modes AccessShare and AccessExclusive, as we
only really need a 'read' and 'write' lock at the moment. Most locking
cases are held until the end of the transaction.
This fixes the cases Tom mentioned earlier in regard to locking with
domains. If the patch is good, I'll work on cleaning up issues with
other database objects that have this problem (most of them).
Rod Taylor
longer works -- IncrHeapAccessStat() didn't actually *do* anything
anymore, so no reason to keep it around AFAICS. I also fixed a
grammatical error in a comment.
Neil Conway
previously determined not to be the last segment of a relation.
This reduces the expected cost to one seek, rather than one seek per
segment. We can get away with this because truncation of a relation
will cause a relcache flush and so the md.c file descriptor will be
closed; when it is re-opened we will re-determine the last segment.
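In rough outline, the caching works like this (the struct is a stand-in
for md.c's per-segment bookkeeping, and the seek is abstracted into a
callback):

    #include <stdbool.h>

    typedef struct Segment
    {
        int             fd;
        bool            known_not_last;   /* cached across calls */
        struct Segment *next;
    } Segment;

    /* Count the blocks in a segmented relation.  Sizing a segment costs a
     * seek, but once a segment has been seen to be full and followed by
     * another, that fact is cached; later calls then pay one seek for the
     * true last segment instead of one per segment. */
    long
    relation_blocks(Segment *seg, long blocks_per_segment,
                    long (*seek_nblocks)(int fd))
    {
        long total = 0;

        while (seg != NULL)
        {
            long nblocks;

            if (seg->known_not_last)
            {
                total += blocks_per_segment;    /* full; no seek needed */
                seg = seg->next;
                continue;
            }

            nblocks = seek_nblocks(seg->fd);
            total += nblocks;

            if (nblocks == blocks_per_segment && seg->next != NULL)
            {
                seg->known_not_last = true;     /* remember for next call */
                seg = seg->next;
            }
            else
                break;                          /* this was the last segment */
        }
        return total;
    }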
database access outside a transaction; revert bogus performance improvement
in SIBackendInit(); improve comments; add documentation (this part courtesy
Neil Conway).
>
> ... he is now about to write an inlined version that can go into
> s_lock.h . I'll send the new patch later on...
OK, here it comes:
An inlined version of tas() that works for both powerpc and
powerpc64. The patch is against 7.3b5 and passes the test suite on
both architectures.
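For reference, the contract tas() has to meet can be written portably with
a GCC builtin; the patch itself of course supplies powerpc lwarx/stwcx.
inline assembly rather than this stand-in:

    typedef int slock_t;

    /* Atomically set the lock word; returns 0 if we acquired the lock,
     * nonzero if it was already held. */
    static inline int
    tas(volatile slock_t *lock)
    {
        return __sync_lock_test_and_set(lock, 1);
    }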
Reinhard Max
between signal handler and enable/disable code, avoid accumulation of
timing error due to trying to maintain remaining-time instead of
absolute-end-time, disable timeout before commit, not after.
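The absolute-end-time point boils down to re-deriving the timer interval
from one fixed deadline each time the alarm is re-armed (a sketch only; the
real code works with setitimer and the backend's signal handling):

    #include <sys/time.h>

    static struct timeval stmt_end_time;   /* absolute deadline, set once */

    /* Interval to arm the timer with.  Recomputing it from the fixed
     * deadline means rounding errors from earlier re-arms cannot
     * accumulate the way a decremented "time remaining" counter can. */
    long
    msecs_until_deadline(void)
    {
        struct timeval now;
        long           msec;

        gettimeofday(&now, NULL);
        msec = (stmt_end_time.tv_sec - now.tv_sec) * 1000L
             + (stmt_end_time.tv_usec - now.tv_usec) / 1000L;
        return (msec > 0) ? msec : 0;
    }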
ProcKill instead, where we still have a PGPROC with which to wait on
LWLocks. This fixes 'can't wait without a PROC structure' failures
occasionally seen during backend shutdown (I'm surprised they weren't
more frequent, actually). Add an Assert() to LWLockAcquire to help
catch any similar mistakes in future. Fix failure to update MyProcPid
for standalone backends and pgstat processes.