mirror of https://github.com/postgres/postgres.git
Remove useless whitespace at end of lines

@@ -264,7 +264,7 @@ while scanning the buffers. (This is a very substantial improvement in
 the contention cost of the writer compared to PG 8.0.)

 During a checkpoint, the writer's strategy must be to write every dirty
-buffer (pinned or not!). We may as well make it start this scan from
+buffer (pinned or not!). We may as well make it start this scan from
 NextVictimBuffer, however, so that the first-to-be-written pages are the
 ones that backends might otherwise have to write for themselves soon.

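The checkpoint behaviour in the hunk above is easy to model outside the server. The following is a minimal C sketch of a toy buffer pool, not PostgreSQL's bufmgr code: NBUFFERS, the Buffer struct, next_victim, and write_buffer are invented stand-ins, and the only point is the scan order, namely that every dirty buffer is written, pinned or not, starting from the clock hand so that the pages a backend would soon have to write for itself get cleaned first.

    #include <stdio.h>
    #include <stdbool.h>

    /* Toy model of a buffer pool; names are illustrative, not PostgreSQL's. */
    #define NBUFFERS 8

    typedef struct
    {
        int     id;
        bool    dirty;
        bool    pinned;
    } Buffer;

    static Buffer pool[NBUFFERS];
    static int  next_victim = 5;    /* pretend clock hand (NextVictimBuffer) */

    static void
    write_buffer(Buffer *buf)
    {
        /* A real implementation would flush the page to disk here. */
        printf("checkpoint: wrote buffer %d%s\n",
               buf->id, buf->pinned ? " (pinned)" : "");
        buf->dirty = false;
    }

    /*
     * Checkpoint strategy from the README: write every dirty buffer, pinned
     * or not, starting the scan at the clock hand so that the buffers most
     * likely to be evicted next are cleaned first.
     */
    static void
    checkpoint_scan(void)
    {
        for (int i = 0; i < NBUFFERS; i++)
        {
            Buffer *buf = &pool[(next_victim + i) % NBUFFERS];

            if (buf->dirty)
                write_buffer(buf);
        }
    }

    int
    main(void)
    {
        for (int i = 0; i < NBUFFERS; i++)
            pool[i] = (Buffer) {.id = i, .dirty = (i % 2 == 0), .pinned = (i == 6)};

        checkpoint_scan();
        return 0;
    }
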

@@ -84,7 +84,7 @@ backends are concurrently inserting into a relation, contention can be avoided
 by having them insert into different pages. But it is also desirable to fill
 up pages in sequential order, to get the benefit of OS prefetching and batched
 writes. The FSM is responsible for making that happen, and the next slot
-pointer helps provide the desired behavior.
+pointer helps provide the desired behavior.

 Higher-level structure
 ----------------------
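A toy search loop illustrates why the next-slot pointer in the hunk above helps. The sketch is not the FSM's real data structure (freespace, next_slot, and find_page_with_space are invented names, and the real map is far more elaborate); it only shows how resuming each search just past the previously returned slot tends to hand concurrent inserters different pages while still filling pages in roughly sequential order.

    #include <stdio.h>

    /*
     * Toy "next slot" search over per-page free space.  This flat array only
     * illustrates how a saved next-slot pointer spreads concurrent inserters
     * across pages while still filling them in roughly sequential order.
     */
    #define NPAGES 6

    static int  freespace[NPAGES] = {100, 80, 60, 200, 150, 50};
    static int  next_slot = 0;      /* where the next search starts */

    static int
    find_page_with_space(int need)
    {
        for (int i = 0; i < NPAGES; i++)
        {
            int     slot = (next_slot + i) % NPAGES;

            if (freespace[slot] >= need)
            {
                freespace[slot] -= need;
                /* resume past this slot so the next caller gets another page */
                next_slot = (slot + 1) % NPAGES;
                return slot;
            }
        }
        return -1;                  /* no page has enough free space */
    }

    int
    main(void)
    {
        /* successive requests (think: different backends) get different pages */
        for (int i = 0; i < 5; i++)
            printf("request %d -> page %d\n", i, find_page_with_space(90));
        return 0;
    }
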

@@ -7,7 +7,7 @@ Mon Jul 18 11:09:22 PDT 1988 W.KLAS

 The cache synchronization is done using a message queue. Every
 backend can register a message which then has to be read by
-all backends. A message read by all backends is removed from the
+all backends. A message read by all backends is removed from the
 queue automatically. If a message has been lost because the buffer
 was full, all backends that haven't read this message will be
 told that they have to reset their cache state. This is done
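The synchronization scheme in the hunk above can be sketched as a ring of messages plus one read position per backend. All names below (QUEUE_SIZE, read_pos, must_reset, send_message, read_messages) are invented for the sketch rather than taken from the server's shared-invalidation code; the point is that a message needs no explicit removal once every backend has read past it, and a backend whose unread messages were overwritten is simply told to reset its cache state.

    #include <stdio.h>
    #include <stdbool.h>

    /*
     * Toy model of a shared message queue: a fixed-size ring plus a read
     * position per backend.  A slot is implicitly "removed" once every
     * backend has advanced past it, because it can then be overwritten.
     */
    #define QUEUE_SIZE   4
    #define NUM_BACKENDS 2

    static int  queue[QUEUE_SIZE];          /* message payloads */
    static long max_msg_num = 0;            /* total messages ever written */
    static long read_pos[NUM_BACKENDS];     /* next message each backend reads */
    static bool must_reset[NUM_BACKENDS];   /* lost messages -> reset cache */

    /* Register a message; backends that lag too far behind must reset. */
    static void
    send_message(int payload)
    {
        queue[max_msg_num % QUEUE_SIZE] = payload;
        max_msg_num++;

        for (int b = 0; b < NUM_BACKENDS; b++)
            if (max_msg_num - read_pos[b] > QUEUE_SIZE)
                must_reset[b] = true;
    }

    /* Read all pending messages for one backend. */
    static void
    read_messages(int backend)
    {
        if (must_reset[backend])
        {
            printf("backend %d: message lost, resetting cache state\n", backend);
            read_pos[backend] = max_msg_num;
            must_reset[backend] = false;
            return;
        }
        while (read_pos[backend] < max_msg_num)
        {
            int     payload = queue[read_pos[backend] % QUEUE_SIZE];

            printf("backend %d: invalidate %d\n", backend, payload);
            read_pos[backend]++;
        }
    }

    int
    main(void)
    {
        send_message(101);
        read_messages(0);           /* backend 0 keeps up */
        for (int i = 0; i < 6; i++) /* backend 1 falls behind and overflows */
            send_message(200 + i);
        read_messages(1);
        return 0;
    }
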

@@ -27,5 +27,5 @@ s_lock_test: s_lock.c $(top_builddir)/src/port/libpgport.a
 check: s_lock_test
 	./s_lock_test

-clean distclean maintainer-clean:
+clean distclean maintainer-clean:
 	rm -f s_lock_test

@@ -31,7 +31,7 @@ arrival order. There is no timeout.

 * Regular locks (a/k/a heavyweight locks). The regular lock manager
 supports a variety of lock modes with table-driven semantics, and it has
-full deadlock detection and automatic release at transaction end.
+full deadlock detection and automatic release at transaction end.
 Regular locks should be used for all user-driven lock requests.

 Acquisition of either a spinlock or a lightweight lock causes query
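The "table-driven semantics" mentioned in the hunk above boil down to a per-mode conflict bitmask, so deciding whether a request can be granted is a single mask intersection. The fragment below is a cut-down illustration with a handful of made-up entries, not the lock manager's full conflict table; it only shows the shape of the mechanism.

    #include <stdio.h>
    #include <stdbool.h>

    /*
     * "Table-driven semantics" in miniature: each lock mode carries a bitmask
     * of the modes it conflicts with, and a grant test is a mask check.
     * Only a few modes are shown here for illustration.
     */
    typedef enum
    {
        NoLock = 0,
        AccessShareLock,        /* plain SELECT */
        RowExclusiveLock,       /* INSERT/UPDATE/DELETE */
        AccessExclusiveLock     /* ALTER TABLE, DROP, ... */
    } LockMode;

    #define BIT(mode)  (1 << (mode))

    /* conflict_table[m] = set of modes that m conflicts with */
    static const int conflict_table[] = {
        [NoLock] = 0,
        [AccessShareLock] = BIT(AccessExclusiveLock),
        [RowExclusiveLock] = BIT(AccessExclusiveLock),
        [AccessExclusiveLock] = BIT(AccessShareLock) | BIT(RowExclusiveLock) |
                                BIT(AccessExclusiveLock),
    };

    /* held_mask is the OR of BIT(mode) for every mode currently granted. */
    static bool
    can_grant(LockMode requested, int held_mask)
    {
        return (conflict_table[requested] & held_mask) == 0;
    }

    int
    main(void)
    {
        int     held = BIT(AccessShareLock) | BIT(RowExclusiveLock);

        printf("AccessShareLock grantable: %d\n",
               can_grant(AccessShareLock, held));        /* 1: shares fine */
        printf("AccessExclusiveLock grantable: %d\n",
               can_grant(AccessExclusiveLock, held));    /* 0: must wait */
        return 0;
    }
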

@@ -260,7 +260,7 @@ A key design consideration is that we want to make routine operations
 (lock grant and release) run quickly when there is no deadlock, and
 avoid the overhead of deadlock handling as much as possible. We do this
 using an "optimistic waiting" approach: if a process cannot acquire the
-lock it wants immediately, it goes to sleep without any deadlock check.
+lock it wants immediately, it goes to sleep without any deadlock check.
 But it also sets a delay timer, with a delay of DeadlockTimeout
 milliseconds (typically set to one second). If the delay expires before
 the process is granted the lock it wants, it runs the deadlock
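The "optimistic waiting" rule in the hunk above reads as: sleep first, and only pay for a deadlock check if DeadlockTimeout expires before the lock arrives. The C sketch below simulates that ordering with a polling loop and stub functions (lock_granted, check_deadlock, and the 100 ms tick are all invented); the real backend sleeps on a semaphore and is woken by a timer rather than polling, but the sequence of events is the same.

    #include <stdbool.h>
    #include <stdio.h>
    #include <time.h>

    /*
     * Sketch of "optimistic waiting": wait for the lock, and only if
     * DEADLOCK_TIMEOUT_MS elapses first, run the deadlock check once and
     * then keep waiting.  The lock and the check are stubs.
     */
    #define DEADLOCK_TIMEOUT_MS 1000    /* DeadlockTimeout, typically 1s */

    static int  wait_loops = 0;

    /* Stub: pretend the lock becomes available after ~1.5 seconds. */
    static bool
    lock_granted(void)
    {
        return ++wait_loops > 15;
    }

    /* Stub: walk the waits-for graph; abort some waiter if there is a cycle. */
    static void
    check_deadlock(void)
    {
        printf("timer fired: running deadlock detector\n");
    }

    static void
    wait_for_lock(void)
    {
        long    slept_ms = 0;
        bool    checked = false;
        struct timespec tick = {0, 100 * 1000 * 1000};  /* 100 ms */

        while (!lock_granted())
        {
            nanosleep(&tick, NULL);
            slept_ms += 100;

            /* Optimism ends here: the check runs only after the timeout. */
            if (!checked && slept_ms >= DEADLOCK_TIMEOUT_MS)
            {
                check_deadlock();
                checked = true;
            }
        }
        printf("lock granted after ~%ld ms\n", slept_ms);
    }

    int
    main(void)
    {
        wait_for_lock();
        return 0;
    }

The point the README is making is that the common, non-deadlocked case never runs the detector at all; only waits that already exceed DeadlockTimeout pay for the waits-for analysis.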