Mirror of https://github.com/postgres/postgres.git (synced 2026-01-05 23:38:41 +03:00)
Re-run pgindent, fixing a problem where comment lines after a blank comment line were output as too long, and update typedefs for the /lib directory. Also fix a case where identifiers were used as variable names in the backend, but as typedefs in ecpg (favor the backend for indenting). Backpatch to 8.1.X.
@@ -9,7 +9,7 @@
  *
  *
  * IDENTIFICATION
- *	  $PostgreSQL: pgsql/src/backend/access/nbtree/nbtpage.c,v 1.89 2005/11/06 19:29:00 tgl Exp $
+ *	  $PostgreSQL: pgsql/src/backend/access/nbtree/nbtpage.c,v 1.90 2005/11/22 18:17:06 momjian Exp $
  *
  * NOTES
  *	  Postgres btree pages look like ordinary relation pages.  The opaque
@@ -412,16 +412,17 @@ _bt_checkpage(Relation rel, Buffer buf)
 	Page		page = BufferGetPage(buf);
 
 	/*
-	 * ReadBuffer verifies that every newly-read page passes PageHeaderIsValid,
-	 * which means it either contains a reasonably sane page header or is
-	 * all-zero.  We have to defend against the all-zero case, however.
+	 * ReadBuffer verifies that every newly-read page passes
+	 * PageHeaderIsValid, which means it either contains a reasonably sane
+	 * page header or is all-zero.  We have to defend against the all-zero
+	 * case, however.
 	 */
 	if (PageIsNew(page))
 		ereport(ERROR,
 				(errcode(ERRCODE_INDEX_CORRUPTED),
-			 errmsg("index \"%s\" contains unexpected zero page at block %u",
-					RelationGetRelationName(rel),
-					BufferGetBlockNumber(buf)),
+		   errmsg("index \"%s\" contains unexpected zero page at block %u",
+				  RelationGetRelationName(rel),
+				  BufferGetBlockNumber(buf)),
 				 errhint("Please REINDEX it.")));
 
 	/*
@@ -440,7 +441,7 @@ _bt_checkpage(Relation rel, Buffer buf)
 /*
  * _bt_getbuf() -- Get a buffer by block number for read or write.
  *
- *		blkno == P_NEW means to get an unallocated index page.  The page
+ *		blkno == P_NEW means to get an unallocated index page.	The page
  *		will be initialized before returning it.
  *
  *		When this routine returns, the appropriate lock is set on the
@@ -475,21 +476,21 @@ _bt_getbuf(Relation rel, BlockNumber blkno, int access)
 		 * have been re-used between the time the last VACUUM scanned it and
 		 * the time the VACUUM made its FSM updates.)
 		 *
-		 * In fact, it's worse than that: we can't even assume that it's safe to
-		 * take a lock on the reported page.  If somebody else has a lock on
-		 * it, or even worse our own caller does, we could deadlock.  (The
+		 * In fact, it's worse than that: we can't even assume that it's safe
+		 * to take a lock on the reported page.  If somebody else has a lock
+		 * on it, or even worse our own caller does, we could deadlock.  (The
 		 * own-caller scenario is actually not improbable.  Consider an index
 		 * on a serial or timestamp column.  Nearly all splits will be at the
 		 * rightmost page, so it's entirely likely that _bt_split will call us
-		 * while holding a lock on the page most recently acquired from FSM.
-		 * A VACUUM running concurrently with the previous split could well
-		 * have placed that page back in FSM.)
+		 * while holding a lock on the page most recently acquired from FSM. A
+		 * VACUUM running concurrently with the previous split could well have
+		 * placed that page back in FSM.)
 		 *
-		 * To get around that, we ask for only a conditional lock on the reported
-		 * page.  If we fail, then someone else is using the page, and we may
-		 * reasonably assume it's not free.  (If we happen to be wrong, the
-		 * worst consequence is the page will be lost to use till the next
-		 * VACUUM, which is no big problem.)
+		 * To get around that, we ask for only a conditional lock on the
+		 * reported page.  If we fail, then someone else is using the page,
+		 * and we may reasonably assume it's not free.  (If we happen to be
+		 * wrong, the worst consequence is the page will be lost to use till
+		 * the next VACUUM, which is no big problem.)
 		 */
 		for (;;)
 		{
@@ -839,12 +840,12 @@ _bt_pagedel(Relation rel, Buffer buf, bool vacuum_full)
 	 * We have to lock the pages we need to modify in the standard order:
 	 * moving right, then up.  Else we will deadlock against other writers.
 	 *
-	 * So, we need to find and write-lock the current left sibling of the target
-	 * page.  The sibling that was current a moment ago could have split, so
-	 * we may have to move right.  This search could fail if either the
-	 * sibling or the target page was deleted by someone else meanwhile; if
-	 * so, give up.  (Right now, that should never happen, since page deletion
-	 * is only done in VACUUM and there shouldn't be multiple VACUUMs
+	 * So, we need to find and write-lock the current left sibling of the
+	 * target page.  The sibling that was current a moment ago could have
+	 * split, so we may have to move right.  This search could fail if either
+	 * the sibling or the target page was deleted by someone else meanwhile;
+	 * if so, give up.  (Right now, that should never happen, since page
+	 * deletion is only done in VACUUM and there shouldn't be multiple VACUUMs
 	 * concurrently on the same table.)
 	 */
 	if (leftsib != P_NONE)