
Add SP-GiST (space-partitioned GiST) index access method.

SP-GiST is comparable to GiST in flexibility, but supports non-balanced
partitioned search structures rather than balanced trees.  As described at
PGCon 2011, this new indexing structure can beat GiST in both index build
time and query speed for search problems that it is well matched to.

There are a number of areas that could still use improvement, but at this
point the code seems committable.

Teodor Sigaev and Oleg Bartunov, with considerable revisions by Tom Lane
Tom Lane
2011-12-17 16:41:16 -05:00
parent 19fc0fe3ae
commit 8daeb5ddd6
46 changed files with 10395 additions and 101 deletions


@@ -0,0 +1,19 @@
src/backend/access/spgist/Makefile
#-------------------------------------------------------------------------
#
# Makefile--
# Makefile for access/spgist
#
# IDENTIFICATION
# src/backend/access/spgist/Makefile
#
#-------------------------------------------------------------------------
subdir = src/backend/access/spgist
top_builddir = ../../../..
include $(top_builddir)/src/Makefile.global
OBJS = spgutils.o spginsert.o spgscan.o spgvacuum.o \
spgdoinsert.o spgxlog.o \
spgtextproc.o spgquadtreeproc.o spgkdtreeproc.o
include $(top_srcdir)/src/backend/common.mk


@@ -0,0 +1,316 @@
src/backend/access/spgist/README
SP-GiST is an abbreviation of space-partitioned GiST. It provides a
generalized infrastructure for implementing space-partitioned data
structures, such as quadtrees, k-d trees, and suffix trees (tries). When
implemented in main memory, these structures are usually designed as a set of
dynamically-allocated nodes linked by pointers.  This is not suitable for
storing directly on disk, since the chains of pointers can be rather long
and require too many disk accesses.  In contrast, disk-based data structures
should have a high fanout to minimize I/O. The challenge is to map tree
nodes to disk pages in such a way that the search algorithm accesses only a
few disk pages, even if it traverses many nodes.

COMMON STRUCTURE DESCRIPTION

Logically, an SP-GiST tree is a set of tuples, each of which can be either
an inner or leaf tuple. Each inner tuple contains "nodes", which are
(label,pointer) pairs, where the pointer (ItemPointerData) is a pointer to
another inner tuple or to the head of a list of leaf tuples. Inner tuples
can have different numbers of nodes (children). Branches can be of different
depth (actually, there is no control or code to support balancing), which
means that the tree is non-balanced. However, leaf and inner tuples cannot
be intermixed at the same level: a downlink from a node of an inner tuple
leads either to one inner tuple, or to a list of leaf tuples.
The SP-GiST core requires that inner and leaf tuples fit on a single index
page, and even more stringently that the list of leaf tuples reached from a
single inner-tuple node all be stored on the same index page. (Restricting
such lists to not cross pages reduces seeks, and allows the list links to be
stored as simple 2-byte OffsetNumbers.)  SP-GiST index opclasses should
therefore ensure that not too many nodes can be needed in one inner tuple,
and that inner-tuple prefixes and leaf-node datum values are not too large.
Inner and leaf tuples are stored separately: the former are stored only on
"inner" pages, the latter only on "leaf" pages. Also, there are special
restrictions on the root page. Early in an index's life, when there is only
one page's worth of data, the root page contains an unorganized set of leaf
tuples. After the first page split has occurred, the root is required to
contain exactly one inner tuple.
When the search traversal algorithm reaches an inner tuple, it chooses a set
of nodes whose subtrees it must descend into.  If it reaches a leaf page, it
scans the list of leaf tuples to find the ones that match the query.
The insertion algorithm descends the tree similarly, except it must choose
just one node to descend to from each inner tuple. Insertion might also have
to modify the inner tuple before it can descend: it could add a new node, or
it could "split" the tuple to obtain a less-specific prefix that can match
the value to be inserted.  If it's necessary to append a new leaf tuple to a
list and there is no free space on the page, then SP-GiST creates a new inner
tuple and distributes the leaf tuples into a set of lists on, perhaps,
several pages.
An inner tuple consists of:

  optional prefix value - all successors must be consistent with it.
    Examples:
      suffix tree       - prefix value is a common prefix string
      quad tree         - centroid
      k-d tree          - one coordinate

  list of nodes, where a node is a (label, pointer) pair.
    Example of a label: a single character for a suffix tree

A leaf tuple consists of:

  a leaf value
    Examples:
      suffix tree       - the rest of the string (postfix)
      quad and k-d tree - the point itself

  ItemPointer to the heap
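
As a rough C sketch only (the actual representations are the packed structs
in spgist_private.h, and these names are invented for illustration), the
logical content of the two tuple kinds looks like this:

    /* Illustrative only; Datum and ItemPointerData are as declared in
     * postgres.h and storage/itemptr.h. */
    typedef struct SketchNode
    {
        Datum           label;      /* node label, e.g. one character */
        ItemPointerData downlink;   /* child inner tuple, or head of a
                                     * list of leaf tuples */
    } SketchNode;

    typedef struct SketchInnerTuple
    {
        bool        hasPrefix;      /* is there a prefix value? */
        Datum       prefix;         /* the optional prefix value */
        int         nNodes;         /* number of children */
        SketchNode *nodes;          /* the (label, pointer) pairs */
    } SketchInnerTuple;

    typedef struct SketchLeafTuple
    {
        Datum           value;      /* the leaf value */
        ItemPointerData heapPtr;    /* the ItemPointer to the heap */
    } SketchLeafTuple;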

INSERTION ALGORITHM

The insertion algorithm is designed to keep the tree in a consistent state
at any moment.  Here is a simplified specification of the insertion
algorithm (numbers refer to the notes below):
  Start with the first tuple on the root page (1)

  loop:
      if (page is leaf) then
          if (enough space)
              insert on page and exit (5)
          else (7)
              call PickSplitFn() (2)
          end if
      else
          switch (chooseFn())
              case MatchNode  - descend through selected node
              case AddNode    - add node and then retry chooseFn (3, 6)
              case SplitTuple - split inner tuple to prefix and postfix, then
                                retry chooseFn with the prefix tuple (4, 6)
      end if

Notes:

(1) Initially, we just dump leaf tuples into the root page until it is full;
then we split it. Once the root is not a leaf page, it can have only one
inner tuple, so as to keep the amount of free space on the root as large as
possible. Both of these rules are meant to postpone doing PickSplit on the
root for as long as possible, so that the topmost partitioning of the search
space is as good as we can easily make it.
(2) The current implementation allows doing picksplit and inserting a new
leaf tuple in one operation, if the new list of leaf tuples fits on one
page.  This is always possible for trees with small nodes, such as quad
trees or k-d trees, but suffix trees may require another picksplit.
(3) Adding a node must keep the inner tuple small enough to fit on a page.
After the addition, the inner tuple could become too large to be stored on
the current page because of the other tuples already there.  In this case it
will be moved to another inner page (see the notes about page management).
When moving a tuple to another page, we can't change the offset numbers of
the other tuples on the page, else we'd invalidate the downlink pointers to
them.  To prevent that, SP-GiST leaves a "placeholder" tuple, which can be
reused later whenever another tuple is added to the page.  See also the
Concurrency and Vacuum sections below.  Right now only suffix trees can add
a node to an existing tuple; quad trees and k-d trees create all possible
nodes at once in the PickSplitFn() call.
(4) The prefix value might only partially match a new value, so the
SplitTuple action allows breaking the current tree branch into upper and
lower sections.  Another way to say it is that we can split the current
inner tuple into "prefix" and "postfix" parts, where the prefix part is able
to match the incoming new value.  Consider an example of insertion into a
suffix tree.  We use the following notation, where a tuple's id is just for
discussion (no such id is actually stored):

    inner tuple: {tuple id}(prefix string)[ comma-separated list of node labels ]
    leaf tuple:  {tuple id}<value>
Suppose we need to insert the string 'www.gogo.com' into the inner tuple

    {1}(www.google.com/)[a, i]

The string does not match the prefix, so we cannot descend.  We must split
the inner tuple into two tuples:

    {2}(www.go)[o]  - prefix tuple
                |
                {3}(gle.com/)[a, i]  - postfix tuple

On the next iteration of the loop we find that 'www.gogo.com' matches the
prefix, but not any node label, so we add a node [g] to tuple {2}:

                   NIL (no child exists yet)
                   |
    {2}(www.go)[o, g]
                |
                {3}(gle.com/)[a, i]

Now we can descend through the [g] node, which will cause us to update the
target string to just 'o.com'.  Finally, we'll insert a leaf tuple bearing
that string:

                   {4}<o.com>
                   |
    {2}(www.go)[o, g]
                |
                {3}(gle.com/)[a, i]
As we can see, the original tuple's node array moves to the postfix tuple
without any changes.  Note also that the SP-GiST core assumes that the
prefix tuple is no larger than the old inner tuple.  That allows us to store
the prefix tuple directly in place of the old inner tuple.  The SP-GiST core
will try to store the postfix tuple on the same page if possible, but will
use another page if there is not enough free space (see notes 5 and 6).
Currently, quad and k-d trees don't use this feature, because they have no
concept of a prefix being "inconsistent" with any new value.  They grow
their depth only by PickSplitFn() calls.
(5) If the pointer from the parent node is NIL, the algorithm chooses a leaf
page to store the new tuple on.  It first tries the last-used leaf page with
the largest free space (which we track in each backend) to better utilize
disk space.  If that's not large enough, the algorithm allocates a new page.
(6) Management of inner pages is very similar to management of leaf pages,
described in (5).
(7) Actually, the current implementation can move the whole list of leaf
tuples plus the new tuple to another page, if the list is short enough.
This improves space utilization, but doesn't change the basis of the
algorithm.

CONCURRENCY

While descending the tree, the insertion algorithm holds exclusive locks on
two tree levels at a time, i.e., both the parent and the child page (which
can be the same page; see the notes above).  There is a possibility of
deadlock
between two insertions if there are cross-referenced pages in different
branches. That is, if inner tuple on page M has a child on page N while
an inner tuple from another branch is on page N and has a child on page M,
then two insertions descending the two branches could deadlock. To prevent
deadlocks we introduce a concept of "triple parity" of pages: if inner tuple
is on page with BlockNumber N, then its child tuples should be placed on the
same page, or else on a page with BlockNumber M where (N+1) mod 3 == M mod 3.
This rule guarantees that tuples on page M will have no children on page N,
since (M+1) mod 3 != N mod 3.
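
The parity rule itself is trivial to check in code.  Here is a minimal
sketch (the function name is ours, for illustration; the real page
allocation code enforces the rule while choosing pages):

    #include "postgres.h"
    #include "storage/block.h"      /* BlockNumber */

    /* May a child of an inner tuple on page parentBlkno live on page
     * childBlkno, under the triple-parity rule? */
    static bool
    triple_parity_ok(BlockNumber parentBlkno, BlockNumber childBlkno)
    {
        return childBlkno == parentBlkno ||
               (parentBlkno + 1) % 3 == childBlkno % 3;
    }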
Insertion may also need to take locks on an additional inner and/or leaf page
to add tuples of the right type(s), when there's not enough room on the pages
it descended through. However, we don't care exactly which such page we add
to, so deadlocks can be avoided by conditionally locking the additional
buffers: if we fail to get lock on an additional page, just try another one.
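
A minimal sketch of that conditional-locking pattern, built from the
standard bufmgr primitives (the helper itself is hypothetical; the real
SP-GiST page allocation logic is more involved):

    #include "postgres.h"
    #include "storage/bufmgr.h"

    /* Try candidate pages in turn, never waiting for a lock, so no
     * deadlock is possible.  Returns InvalidBuffer if no candidate could
     * be locked; the caller could then extend the relation instead. */
    static Buffer
    lock_some_candidate_page(Relation index, BlockNumber *candidates, int n)
    {
        int         i;

        for (i = 0; i < n; i++)
        {
            Buffer      buf = ReadBuffer(index, candidates[i]);

            if (ConditionalLockBuffer(buf))
                return buf;     /* got exclusive lock without waiting */
            ReleaseBuffer(buf); /* held by someone else; try another */
        }
        return InvalidBuffer;
    }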
The search traversal algorithm is rather traditional.  At each non-leaf
level, it share-locks the page, identifies which node(s) in the current
inner tuple
need to be visited, and puts those addresses on a stack of pages to examine
later. It then releases lock on the current buffer before visiting the next
stack item. So only one page is locked at a time, and no deadlock is
possible. But instead, we have to worry about race conditions: by the time
we arrive at a pointed-to page, a concurrent insertion could have replaced
the target inner tuple (or leaf tuple chain) with data placed elsewhere.
To handle that, whenever the insertion algorithm changes a nonempty downlink
in an inner tuple, it places a "redirect tuple" in place of the lower-level
inner tuple or leaf-tuple chain head that the link formerly led to. Scans
(though not insertions) must be prepared to honor such redirects. Only a
scan that had already visited the parent level could possibly reach such a
redirect tuple, so we can remove redirects once all active transactions have
been flushed out of the system.

DEAD TUPLES

Tuples on leaf pages can be in one of four states:
SPGIST_LIVE: normal, live pointer to a heap tuple.
SPGIST_REDIRECT: placeholder that contains a link to another place in the
index. When a chain of leaf tuples has to be moved to another page, a
redirect tuple is inserted in place of the chain's head tuple. The parent
inner tuple's downlink is updated when this happens, but concurrent scans
might be "in flight" from the parent page to the child page (since they
release lock on the parent page before attempting to lock the child).
The redirect pointer serves to tell such a scan where to go. A redirect
pointer is only needed for as long as such concurrent scans could be in
progress. Eventually, it's converted into a PLACEHOLDER dead tuple by
VACUUM, and is then a candidate for replacement. Searches that find such
a tuple (which should never be part of a chain) should immediately proceed
to the other place, forgetting about the redirect tuple. Insertions that
reach such a tuple should raise error, since a valid downlink should never
point to such a tuple.
SPGIST_DEAD: tuple is dead, but it cannot be removed or moved to a
different offset on the page because there is a link leading to it from
some inner tuple elsewhere in the index. (Such a tuple is never part of a
chain, since we don't need one unless there is nothing live left in its
chain.) Searches should ignore such entries. If an insertion action
arrives at such a tuple, it should either replace it in-place (if there's
room on the page to hold the desired new leaf tuple) or replace it with a
redirection pointer to wherever it puts the new leaf tuple.
SPGIST_PLACEHOLDER: tuple is dead, and there are known to be no links to
it from elsewhere. When a live tuple is deleted or moved away, and not
replaced by a redirect pointer, it is replaced by a placeholder to keep
the offsets of later tuples on the same page from changing. Placeholders
can be freely replaced when adding a new tuple to the page, and also
VACUUM will delete any that are at the end of the range of valid tuple
offsets. Both searches and insertions should complain if a link from
elsewhere leads them to a placeholder tuple.
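
For reference, the four states can be sketched as an enum (in the actual
code they are integer constants defined in spgist_private.h):

    typedef enum SketchTupleState   /* illustrative name */
    {
        SPGIST_LIVE,            /* normal, live pointer to a heap tuple */
        SPGIST_REDIRECT,        /* placeholder holding a link elsewhere */
        SPGIST_DEAD,            /* dead, but still the target of a downlink */
        SPGIST_PLACEHOLDER      /* dead and unreferenced; reusable slot */
    } SketchTupleState;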
When the root page is also a leaf, all its tuples should be in LIVE state;
there's no need for the other states, since there are no links from
elsewhere and no need to preserve offset numbers.
Tuples on inner pages can be in LIVE, REDIRECT, or PLACEHOLDER states.
The REDIRECT state has the same function as on leaf pages, to send
concurrent searches to the place where they need to go after an inner
tuple is moved to another page. Expired REDIRECT pointers are converted
to PLACEHOLDER status by VACUUM, and are then candidates for replacement.
DEAD state is not currently possible, since VACUUM does not attempt to
remove unused inner tuples.

VACUUM

VACUUM (or more precisely, spgbulkdelete) performs a single sequential scan
over the entire index. On both leaf and inner pages, we can convert old
REDIRECT tuples into PLACEHOLDER status, and then remove any PLACEHOLDERs
that are at the end of the page (since they aren't needed to preserve the
offsets of any live tuples). On leaf pages, we scan for tuples that need
to be deleted because their heap TIDs match a vacuum target TID.
If we find a deletable tuple that is not at the head of its chain, we
can simply replace it with a PLACEHOLDER, updating the chain links to
remove it from the chain. If it is at the head of its chain, but there's
at least one live tuple remaining in the chain, we move that live tuple
to the head tuple's offset, replacing it with a PLACEHOLDER to preserve
the offsets of other tuples. This keeps the parent inner tuple's downlink
valid.  If we find ourselves deleting all live tuples in a chain, we
replace the head tuple with a DEAD tuple and the rest with PLACEHOLDER
tuples.
The parent inner tuple's downlink thus points to the DEAD tuple, and the
rules explained in the previous section keep everything working.
VACUUM doesn't know a priori which tuples are the heads of their chains, but
it can easily figure that out by constructing a predecessor array that's
the reverse map of the nextOffset links (i.e., when we see that tuple x
links to tuple y, we set predecessor[y] = x).  Then the head tuples are the
ones with
no predecessor.
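
A minimal sketch of that reverse-mapping pass (the function is
hypothetical; arrays are indexed 1..maxOffset and predecessor[] must be
zero-initialized by the caller):

    #include "postgres.h"
    #include "storage/off.h"    /* OffsetNumber, InvalidOffsetNumber */

    /* Invert the nextOffset links: afterwards, offset h is the head of
     * its chain iff predecessor[h] == InvalidOffsetNumber. */
    static void
    build_predecessor_map(const OffsetNumber *nextOffset,
                          OffsetNumber maxOffset,
                          OffsetNumber *predecessor)
    {
        OffsetNumber x;

        for (x = FirstOffsetNumber; x <= maxOffset; x++)
        {
            OffsetNumber y = nextOffset[x];

            if (y != InvalidOffsetNumber)
                predecessor[y] = x;     /* tuple x links to tuple y */
        }
    }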
spgbulkdelete also updates the index's free space map.
Currently, spgvacuumcleanup has nothing to do if spgbulkdelete was
performed; otherwise, it does an spgbulkdelete scan with an empty target
list, so as to clean up redirections and placeholders, update the free
space map, and gather statistics.

LAST USED PAGE MANAGEMENT

The list of last-used pages contains four pages - a leaf page and three
inner pages, one from each "triple parity" group.  This list is stored
between calls on the index meta page, but updates are never WAL-logged, to
decrease WAL traffic.  Incorrect data on the meta page isn't critical,
because we can simply allocate a new page at any moment.
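
An illustrative sketch of the cached data (struct and field names here are
invented; see the metapage definitions in spgist_private.h for the real
layout):

    #include "postgres.h"
    #include "storage/block.h"      /* BlockNumber */

    typedef struct SketchLastUsedPage
    {
        BlockNumber blkno;          /* page number, or InvalidBlockNumber */
        int         freeSpace;      /* last known free space, in bytes */
    } SketchLastUsedPage;

    typedef struct SketchLastUsedPages
    {
        SketchLastUsedPage leafPage;        /* last-used leaf page */
        SketchLastUsedPage innerPages[3];   /* one per parity group */
    } SketchLastUsedPages;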

AUTHORS

Teodor Sigaev <teodor@sigaev.ru>
Oleg Bartunov <oleg@sai.msu.su>

(File diff suppressed because it is too large.)


@@ -0,0 +1,219 @@
src/backend/access/spgist/spginsert.c
/*-------------------------------------------------------------------------
*
* spginsert.c
* Externally visible index creation/insertion routines
*
* All the actual insertion logic is in spgdoinsert.c.
*
* Portions Copyright (c) 1996-2011, PostgreSQL Global Development Group
* Portions Copyright (c) 1994, Regents of the University of California
*
* IDENTIFICATION
* src/backend/access/spgist/spginsert.c
*
*-------------------------------------------------------------------------
*/
#include "postgres.h"
#include "access/genam.h"
#include "access/spgist_private.h"
#include "catalog/index.h"
#include "miscadmin.h"
#include "storage/bufmgr.h"
#include "storage/smgr.h"
#include "utils/memutils.h"
typedef struct
{
SpGistState spgstate; /* SPGiST's working state */
MemoryContext tmpCtx; /* per-tuple temporary context */
} SpGistBuildState;
/* Callback to process one heap tuple during IndexBuildHeapScan */
static void
spgistBuildCallback(Relation index, HeapTuple htup, Datum *values,
bool *isnull, bool tupleIsAlive, void *state)
{
SpGistBuildState *buildstate = (SpGistBuildState *) state;
/* SPGiST doesn't index nulls */
if (*isnull == false)
{
/* Work in temp context, and reset it after each tuple */
MemoryContext oldCtx = MemoryContextSwitchTo(buildstate->tmpCtx);
spgdoinsert(index, &buildstate->spgstate, &htup->t_self, *values);
MemoryContextSwitchTo(oldCtx);
MemoryContextReset(buildstate->tmpCtx);
}
}
/*
* Build an SP-GiST index.
*/
Datum
spgbuild(PG_FUNCTION_ARGS)
{
Relation heap = (Relation) PG_GETARG_POINTER(0);
Relation index = (Relation) PG_GETARG_POINTER(1);
IndexInfo *indexInfo = (IndexInfo *) PG_GETARG_POINTER(2);
IndexBuildResult *result;
double reltuples;
SpGistBuildState buildstate;
Buffer metabuffer,
rootbuffer;
if (RelationGetNumberOfBlocks(index) != 0)
elog(ERROR, "index \"%s\" already contains data",
RelationGetRelationName(index));
/*
* Initialize the meta page and root page
*/
metabuffer = SpGistNewBuffer(index);
rootbuffer = SpGistNewBuffer(index);
Assert(BufferGetBlockNumber(metabuffer) == SPGIST_METAPAGE_BLKNO);
Assert(BufferGetBlockNumber(rootbuffer) == SPGIST_HEAD_BLKNO);
START_CRIT_SECTION();
SpGistInitMetapage(BufferGetPage(metabuffer));
MarkBufferDirty(metabuffer);
SpGistInitBuffer(rootbuffer, SPGIST_LEAF);
MarkBufferDirty(rootbuffer);
if (RelationNeedsWAL(index))
{
XLogRecPtr recptr;
XLogRecData rdata;
/* WAL data is just the relfilenode */
rdata.data = (char *) &(index->rd_node);
rdata.len = sizeof(RelFileNode);
rdata.buffer = InvalidBuffer;
rdata.next = NULL;
recptr = XLogInsert(RM_SPGIST_ID, XLOG_SPGIST_CREATE_INDEX, &rdata);
PageSetLSN(BufferGetPage(metabuffer), recptr);
PageSetTLI(BufferGetPage(metabuffer), ThisTimeLineID);
PageSetLSN(BufferGetPage(rootbuffer), recptr);
PageSetTLI(BufferGetPage(rootbuffer), ThisTimeLineID);
}
END_CRIT_SECTION();
UnlockReleaseBuffer(metabuffer);
UnlockReleaseBuffer(rootbuffer);
/*
* Now insert all the heap data into the index
*/
initSpGistState(&buildstate.spgstate, index);
buildstate.spgstate.isBuild = true;
buildstate.tmpCtx = AllocSetContextCreate(CurrentMemoryContext,
"SP-GiST build temporary context",
ALLOCSET_DEFAULT_MINSIZE,
ALLOCSET_DEFAULT_INITSIZE,
ALLOCSET_DEFAULT_MAXSIZE);
reltuples = IndexBuildHeapScan(heap, index, indexInfo, true,
spgistBuildCallback, (void *) &buildstate);
MemoryContextDelete(buildstate.tmpCtx);
SpGistUpdateMetaPage(index);
result = (IndexBuildResult *) palloc0(sizeof(IndexBuildResult));
result->heap_tuples = result->index_tuples = reltuples;
PG_RETURN_POINTER(result);
}
/*
* Build an empty SPGiST index in the initialization fork
*/
Datum
spgbuildempty(PG_FUNCTION_ARGS)
{
Relation index = (Relation) PG_GETARG_POINTER(0);
Page page;
/* Construct metapage. */
page = (Page) palloc(BLCKSZ);
SpGistInitMetapage(page);
/* Write the page. If archiving/streaming, XLOG it. */
smgrwrite(index->rd_smgr, INIT_FORKNUM, SPGIST_METAPAGE_BLKNO,
(char *) page, true);
if (XLogIsNeeded())
log_newpage(&index->rd_smgr->smgr_rnode.node, INIT_FORKNUM,
SPGIST_METAPAGE_BLKNO, page);
/* Likewise for the root page. */
SpGistInitPage(page, SPGIST_LEAF);
smgrwrite(index->rd_smgr, INIT_FORKNUM, SPGIST_HEAD_BLKNO,
(char *) page, true);
if (XLogIsNeeded())
log_newpage(&index->rd_smgr->smgr_rnode.node, INIT_FORKNUM,
SPGIST_HEAD_BLKNO, page);
/*
* An immediate sync is required even if we xlog'd the pages, because the
* writes did not go through shared buffers and therefore a concurrent
* checkpoint may have moved the redo pointer past our xlog record.
*/
smgrimmedsync(index->rd_smgr, INIT_FORKNUM);
PG_RETURN_VOID();
}
/*
* Insert one new tuple into an SPGiST index.
*/
Datum
spginsert(PG_FUNCTION_ARGS)
{
Relation index = (Relation) PG_GETARG_POINTER(0);
Datum *values = (Datum *) PG_GETARG_POINTER(1);
bool *isnull = (bool *) PG_GETARG_POINTER(2);
ItemPointer ht_ctid = (ItemPointer) PG_GETARG_POINTER(3);
#ifdef NOT_USED
Relation heapRel = (Relation) PG_GETARG_POINTER(4);
IndexUniqueCheck checkUnique = (IndexUniqueCheck) PG_GETARG_INT32(5);
#endif
SpGistState spgstate;
MemoryContext oldCtx;
MemoryContext insertCtx;
/* SPGiST doesn't index nulls */
if (*isnull)
PG_RETURN_BOOL(false);
insertCtx = AllocSetContextCreate(CurrentMemoryContext,
"SP-GiST insert temporary context",
ALLOCSET_DEFAULT_MINSIZE,
ALLOCSET_DEFAULT_INITSIZE,
ALLOCSET_DEFAULT_MAXSIZE);
oldCtx = MemoryContextSwitchTo(insertCtx);
initSpGistState(&spgstate, index);
spgdoinsert(index, &spgstate, ht_ctid, *values);
SpGistUpdateMetaPage(index);
MemoryContextSwitchTo(oldCtx);
MemoryContextDelete(insertCtx);
/* return false since we've not done any unique check */
PG_RETURN_BOOL(false);
}


@@ -0,0 +1,298 @@
src/backend/access/spgist/spgkdtreeproc.c
/*-------------------------------------------------------------------------
*
* spgkdtreeproc.c
* implementation of k-d tree over points for SP-GiST
*
*
* Portions Copyright (c) 1996-2011, PostgreSQL Global Development Group
* Portions Copyright (c) 1994, Regents of the University of California
*
* IDENTIFICATION
* src/backend/access/spgist/spgkdtreeproc.c
*
*-------------------------------------------------------------------------
*/
#include "postgres.h"
#include "access/gist.h" /* for RTree strategy numbers */
#include "access/spgist.h"
#include "catalog/pg_type.h"
#include "utils/builtins.h"
#include "utils/geo_decls.h"
Datum
spg_kd_config(PG_FUNCTION_ARGS)
{
/* spgConfigIn *cfgin = (spgConfigIn *) PG_GETARG_POINTER(0); */
spgConfigOut *cfg = (spgConfigOut *) PG_GETARG_POINTER(1);
cfg->prefixType = FLOAT8OID;
cfg->labelType = VOIDOID; /* we don't need node labels */
cfg->longValuesOK = false;
PG_RETURN_VOID();
}
static int
getSide(double coord, bool isX, Point *tst)
{
double tstcoord = (isX) ? tst->x : tst->y;
if (coord == tstcoord)
return 0;
else if (coord > tstcoord)
return 1;
else
return -1;
}
Datum
spg_kd_choose(PG_FUNCTION_ARGS)
{
spgChooseIn *in = (spgChooseIn *) PG_GETARG_POINTER(0);
spgChooseOut *out = (spgChooseOut *) PG_GETARG_POINTER(1);
Point *inPoint = DatumGetPointP(in->datum);
double coord;
if (in->allTheSame)
elog(ERROR, "allTheSame should not occur for k-d trees");
Assert(in->hasPrefix);
coord = DatumGetFloat8(in->prefixDatum);
Assert(in->nNodes == 2);
out->resultType = spgMatchNode;
out->result.matchNode.nodeN =
(getSide(coord, in->level % 2, inPoint) > 0) ? 0 : 1;
out->result.matchNode.levelAdd = 1;
out->result.matchNode.restDatum = PointPGetDatum(inPoint);
PG_RETURN_VOID();
}
typedef struct SortedPoint
{
Point *p;
int i;
} SortedPoint;
static int
x_cmp(const void *a, const void *b)
{
SortedPoint *pa = (SortedPoint *) a;
SortedPoint *pb = (SortedPoint *) b;
if (pa->p->x == pb->p->x)
return 0;
return (pa->p->x > pb->p->x) ? 1 : -1;
}
static int
y_cmp(const void *a, const void *b)
{
SortedPoint *pa = (SortedPoint *) a;
SortedPoint *pb = (SortedPoint *) b;
if (pa->p->y == pb->p->y)
return 0;
return (pa->p->y > pb->p->y) ? 1 : -1;
}
Datum
spg_kd_picksplit(PG_FUNCTION_ARGS)
{
spgPickSplitIn *in = (spgPickSplitIn *) PG_GETARG_POINTER(0);
spgPickSplitOut *out = (spgPickSplitOut *) PG_GETARG_POINTER(1);
int i;
int middle;
SortedPoint *sorted;
double coord;
sorted = palloc(sizeof(*sorted) * in->nTuples);
for (i = 0; i < in->nTuples; i++)
{
sorted[i].p = DatumGetPointP(in->datums[i]);
sorted[i].i = i;
}
qsort(sorted, in->nTuples, sizeof(*sorted),
(in->level % 2) ? x_cmp : y_cmp);
middle = in->nTuples >> 1;
coord = (in->level % 2) ? sorted[middle].p->x : sorted[middle].p->y;
out->hasPrefix = true;
out->prefixDatum = Float8GetDatum(coord);
out->nNodes = 2;
out->nodeLabels = NULL; /* we don't need node labels */
out->mapTuplesToNodes = palloc(sizeof(int) * in->nTuples);
out->leafTupleDatums = palloc(sizeof(Datum) * in->nTuples);
/*
* Note: points that have coordinates exactly equal to coord may get
* classified into either node, depending on where they happen to fall
* in the sorted list. This is okay as long as the inner_consistent
* function descends into both sides for such cases. This is better
* than the alternative of trying to have an exact boundary, because
* it keeps the tree balanced even when we have many instances of the
* same point value. So we should never trigger the allTheSame logic.
*/
for (i = 0; i < in->nTuples; i++)
{
Point *p = sorted[i].p;
int n = sorted[i].i;
out->mapTuplesToNodes[n] = (i < middle) ? 0 : 1;
out->leafTupleDatums[n] = PointPGetDatum(p);
}
PG_RETURN_VOID();
}
Datum
spg_kd_inner_consistent(PG_FUNCTION_ARGS)
{
spgInnerConsistentIn *in = (spgInnerConsistentIn *) PG_GETARG_POINTER(0);
spgInnerConsistentOut *out = (spgInnerConsistentOut *) PG_GETARG_POINTER(1);
Point *query;
BOX *boxQuery;
double coord;
query = DatumGetPointP(in->query);
Assert(in->hasPrefix);
coord = DatumGetFloat8(in->prefixDatum);
if (in->allTheSame)
elog(ERROR, "allTheSame should not occur for k-d trees");
Assert(in->nNodes == 2);
out->nodeNumbers = (int *) palloc(sizeof(int) * 2);
out->levelAdds = (int *) palloc(sizeof(int) * 2);
out->levelAdds[0] = 1;
out->levelAdds[1] = 1;
out->nNodes = 0;
switch (in->strategy)
{
case RTLeftStrategyNumber:
out->nNodes = 1;
out->nodeNumbers[0] = 0;
if ((in->level % 2) == 0 || FPge(query->x, coord))
{
out->nodeNumbers[1] = 1;
out->nNodes++;
}
break;
case RTRightStrategyNumber:
out->nNodes = 1;
out->nodeNumbers[0] = 1;
if ((in->level % 2) == 0 || FPle(query->x, coord))
{
out->nodeNumbers[1] = 0;
out->nNodes++;
}
break;
case RTSameStrategyNumber:
if (in->level % 2)
{
if (FPle(query->x, coord))
{
out->nodeNumbers[out->nNodes] = 0;
out->nNodes++;
}
if (FPge(query->x, coord))
{
out->nodeNumbers[out->nNodes] = 1;
out->nNodes++;
}
}
else
{
if (FPle(query->y, coord))
{
out->nodeNumbers[out->nNodes] = 0;
out->nNodes++;
}
if (FPge(query->y, coord))
{
out->nodeNumbers[out->nNodes] = 1;
out->nNodes++;
}
}
break;
case RTBelowStrategyNumber:
out->nNodes = 1;
out->nodeNumbers[0] = 0;
if ((in->level % 2) == 1 || FPge(query->y, coord))
{
out->nodeNumbers[1] = 1;
out->nNodes++;
}
break;
case RTAboveStrategyNumber:
out->nNodes = 1;
out->nodeNumbers[0] = 1;
if ((in->level % 2) == 1 || FPle(query->y, coord))
{
out->nodeNumbers[1] = 0;
out->nNodes++;
}
break;
case RTContainedByStrategyNumber:
/*
* For this operator, the query is a box not a point. We cheat to
* the extent of assuming that DatumGetPointP won't do anything
* that would be bad for a pointer-to-box.
*/
boxQuery = DatumGetBoxP(in->query);
out->nNodes = 1;
if (in->level % 2)
{
if (FPlt(boxQuery->high.x, coord))
out->nodeNumbers[0] = 0;
else if (FPgt(boxQuery->low.x, coord))
out->nodeNumbers[0] = 1;
else
{
out->nodeNumbers[0] = 0;
out->nodeNumbers[1] = 1;
out->nNodes = 2;
}
}
else
{
if (FPlt(boxQuery->high.y, coord))
out->nodeNumbers[0] = 0;
else if (FPgt(boxQuery->low.y, coord))
out->nodeNumbers[0] = 1;
else
{
out->nodeNumbers[0] = 0;
out->nodeNumbers[1] = 1;
out->nNodes = 2;
}
}
break;
default:
elog(ERROR, "unrecognized strategy number: %d", in->strategy);
break;
}
PG_RETURN_VOID();
}
/*
* spg_kd_leaf_consistent() is the same as spg_quad_leaf_consistent(),
* since we support the same operators and the same leaf data type.
* So we just borrow that function.
*/


@@ -0,0 +1,360 @@
src/backend/access/spgist/spgquadtreeproc.c
/*-------------------------------------------------------------------------
*
* spgquadtreeproc.c
* implementation of quad tree over points for SP-GiST
*
*
* Portions Copyright (c) 1996-2011, PostgreSQL Global Development Group
* Portions Copyright (c) 1994, Regents of the University of California
*
* IDENTIFICATION
* src/backend/access/spgist/spgquadtreeproc.c
*
*-------------------------------------------------------------------------
*/
#include "postgres.h"
#include "access/gist.h" /* for RTree strategy numbers */
#include "access/spgist.h"
#include "catalog/pg_type.h"
#include "utils/builtins.h"
#include "utils/geo_decls.h"
Datum
spg_quad_config(PG_FUNCTION_ARGS)
{
/* spgConfigIn *cfgin = (spgConfigIn *) PG_GETARG_POINTER(0); */
spgConfigOut *cfg = (spgConfigOut *) PG_GETARG_POINTER(1);
cfg->prefixType = POINTOID;
cfg->labelType = VOIDOID; /* we don't need node labels */
cfg->longValuesOK = false;
PG_RETURN_VOID();
}
#define SPTEST(f, x, y) \
DatumGetBool(DirectFunctionCall2(f, PointPGetDatum(x), PointPGetDatum(y)))
/*
* Determine which quadrant a point falls into, relative to the centroid.
*
* Quadrants are identified like this:
*
* 4 | 1
* ----+-----
* 3 | 2
*
* Points on one of the axes are taken to lie in the lowest-numbered
* adjacent quadrant.
*/
static int2
getQuadrant(Point *centroid, Point *tst)
{
if ((SPTEST(point_above, tst, centroid) ||
SPTEST(point_horiz, tst, centroid)) &&
(SPTEST(point_right, tst, centroid) ||
SPTEST(point_vert, tst, centroid)))
return 1;
if (SPTEST(point_below, tst, centroid) &&
(SPTEST(point_right, tst, centroid) ||
SPTEST(point_vert, tst, centroid)))
return 2;
if ((SPTEST(point_below, tst, centroid) ||
SPTEST(point_horiz, tst, centroid)) &&
SPTEST(point_left, tst, centroid))
return 3;
if (SPTEST(point_above, tst, centroid) &&
SPTEST(point_left, tst, centroid))
return 4;
elog(ERROR, "getQuadrant: impossible case");
return 0;
}
Datum
spg_quad_choose(PG_FUNCTION_ARGS)
{
spgChooseIn *in = (spgChooseIn *) PG_GETARG_POINTER(0);
spgChooseOut *out = (spgChooseOut *) PG_GETARG_POINTER(1);
Point *inPoint = DatumGetPointP(in->datum),
*centroid;
if (in->allTheSame)
{
out->resultType = spgMatchNode;
/* nodeN will be set by core */
out->result.matchNode.levelAdd = 0;
out->result.matchNode.restDatum = PointPGetDatum(inPoint);
PG_RETURN_VOID();
}
Assert(in->hasPrefix);
centroid = DatumGetPointP(in->prefixDatum);
Assert(in->nNodes == 4);
out->resultType = spgMatchNode;
out->result.matchNode.nodeN = getQuadrant(centroid, inPoint) - 1;
out->result.matchNode.levelAdd = 0;
out->result.matchNode.restDatum = PointPGetDatum(inPoint);
PG_RETURN_VOID();
}
#ifdef USE_MEDIAN
static int
x_cmp(const void *a, const void *b, void *arg)
{
Point *pa = *(Point **) a;
Point *pb = *(Point **) b;
if (pa->x == pb->x)
return 0;
return (pa->x > pb->x) ? 1 : -1;
}
static int
y_cmp(const void *a, const void *b, void *arg)
{
Point *pa = *(Point **) a;
Point *pb = *(Point **) b;
if (pa->y == pb->y)
return 0;
return (pa->y > pb->y) ? 1 : -1;
}
#endif
Datum
spg_quad_picksplit(PG_FUNCTION_ARGS)
{
spgPickSplitIn *in = (spgPickSplitIn *) PG_GETARG_POINTER(0);
spgPickSplitOut *out = (spgPickSplitOut *) PG_GETARG_POINTER(1);
int i;
Point *centroid;
#ifdef USE_MEDIAN
/* Use the median values of x and y as the centroid point */
Point **sorted;
sorted = palloc(sizeof(*sorted) * in->nTuples);
for (i = 0; i < in->nTuples; i++)
sorted[i] = DatumGetPointP(in->datums[i]);
centroid = palloc(sizeof(*centroid));
qsort(sorted, in->nTuples, sizeof(*sorted), x_cmp);
centroid->x = sorted[in->nTuples >> 1]->x;
qsort(sorted, in->nTuples, sizeof(*sorted), y_cmp);
centroid->y = sorted[in->nTuples >> 1]->y;
#else
/* Use the average values of x and y as the centroid point */
centroid = palloc0(sizeof(*centroid));
for (i = 0; i < in->nTuples; i++)
{
centroid->x += DatumGetPointP(in->datums[i])->x;
centroid->y += DatumGetPointP(in->datums[i])->y;
}
centroid->x /= in->nTuples;
centroid->y /= in->nTuples;
#endif
out->hasPrefix = true;
out->prefixDatum = PointPGetDatum(centroid);
out->nNodes = 4;
out->nodeLabels = NULL; /* we don't need node labels */
out->mapTuplesToNodes = palloc(sizeof(int) * in->nTuples);
out->leafTupleDatums = palloc(sizeof(Datum) * in->nTuples);
for (i = 0; i < in->nTuples; i++)
{
Point *p = DatumGetPointP(in->datums[i]);
int quadrant = getQuadrant(centroid, p) - 1;
out->leafTupleDatums[i] = PointPGetDatum(p);
out->mapTuplesToNodes[i] = quadrant;
}
PG_RETURN_VOID();
}
/* Subroutine to fill out->nodeNumbers[] for spg_quad_inner_consistent */
static void
setNodes(spgInnerConsistentOut *out, bool isAll, int first, int second)
{
if (isAll)
{
out->nNodes = 4;
out->nodeNumbers[0] = 0;
out->nodeNumbers[1] = 1;
out->nodeNumbers[2] = 2;
out->nodeNumbers[3] = 3;
}
else
{
out->nNodes = 2;
out->nodeNumbers[0] = first - 1;
out->nodeNumbers[1] = second - 1;
}
}
Datum
spg_quad_inner_consistent(PG_FUNCTION_ARGS)
{
spgInnerConsistentIn *in = (spgInnerConsistentIn *) PG_GETARG_POINTER(0);
spgInnerConsistentOut *out = (spgInnerConsistentOut *) PG_GETARG_POINTER(1);
Point *query,
*centroid;
BOX *boxQuery;
query = DatumGetPointP(in->query);
Assert(in->hasPrefix);
centroid = DatumGetPointP(in->prefixDatum);
if (in->allTheSame)
{
/* Report that all nodes should be visited */
int i;
out->nNodes = in->nNodes;
out->nodeNumbers = (int *) palloc(sizeof(int) * in->nNodes);
for (i = 0; i < in->nNodes; i++)
out->nodeNumbers[i] = i;
PG_RETURN_VOID();
}
Assert(in->nNodes == 4);
out->nodeNumbers = (int *) palloc(sizeof(int) * 4);
switch (in->strategy)
{
case RTLeftStrategyNumber:
setNodes(out, SPTEST(point_left, centroid, query), 3, 4);
break;
case RTRightStrategyNumber:
setNodes(out, SPTEST(point_right, centroid, query), 1, 2);
break;
case RTSameStrategyNumber:
out->nNodes = 1;
out->nodeNumbers[0] = getQuadrant(centroid, query) - 1;
break;
case RTBelowStrategyNumber:
setNodes(out, SPTEST(point_below, centroid, query), 2, 3);
break;
case RTAboveStrategyNumber:
setNodes(out, SPTEST(point_above, centroid, query), 1, 4);
break;
case RTContainedByStrategyNumber:
/*
* For this operator, the query is a box not a point. We cheat to
* the extent of assuming that DatumGetPointP won't do anything
* that would be bad for a pointer-to-box.
*/
boxQuery = DatumGetBoxP(in->query);
if (DatumGetBool(DirectFunctionCall2(box_contain_pt,
PointerGetDatum(boxQuery),
PointerGetDatum(centroid))))
{
/* centroid is in box, so descend to all quadrants */
setNodes(out, true, 0, 0);
}
else
{
/* identify quadrant(s) containing all corners of box */
Point p;
int i,
r = 0;
p = boxQuery->low;
r |= 1 << (getQuadrant(centroid, &p) - 1);
p.y = boxQuery->high.y;
r |= 1 << (getQuadrant(centroid, &p) - 1);
p = boxQuery->high;
r |= 1 << (getQuadrant(centroid, &p) - 1);
p.x = boxQuery->low.x;
r |= 1 << (getQuadrant(centroid, &p) - 1);
/* we must descend into those quadrant(s) */
out->nNodes = 0;
for (i = 0; i < 4; i++)
{
if (r & (1 << i))
{
out->nodeNumbers[out->nNodes] = i;
out->nNodes++;
}
}
}
break;
default:
elog(ERROR, "unrecognized strategy number: %d", in->strategy);
break;
}
PG_RETURN_VOID();
}
Datum
spg_quad_leaf_consistent(PG_FUNCTION_ARGS)
{
spgLeafConsistentIn *in = (spgLeafConsistentIn *) PG_GETARG_POINTER(0);
spgLeafConsistentOut *out = (spgLeafConsistentOut *) PG_GETARG_POINTER(1);
Point *query = DatumGetPointP(in->query);
Point *datum = DatumGetPointP(in->leafDatum);
bool res;
/* all tests are exact */
out->recheck = false;
switch (in->strategy)
{
case RTLeftStrategyNumber:
res = SPTEST(point_left, datum, query);
break;
case RTRightStrategyNumber:
res = SPTEST(point_right, datum, query);
break;
case RTSameStrategyNumber:
res = SPTEST(point_eq, datum, query);
break;
case RTBelowStrategyNumber:
res = SPTEST(point_below, datum, query);
break;
case RTAboveStrategyNumber:
res = SPTEST(point_above, datum, query);
break;
case RTContainedByStrategyNumber:
/*
* For this operator, the query is a box not a point. We cheat to
* the extent of assuming that DatumGetPointP won't do anything
* that would be bad for a pointer-to-box.
*/
res = SPTEST(box_contain_pt, query, datum);
break;
default:
elog(ERROR, "unrecognized strategy number: %d", in->strategy);
res = false;
break;
}
PG_RETURN_BOOL(res);
}


@@ -0,0 +1,543 @@
src/backend/access/spgist/spgscan.c
/*-------------------------------------------------------------------------
*
* spgscan.c
* routines for scanning SP-GiST indexes
*
*
* Portions Copyright (c) 1996-2011, PostgreSQL Global Development Group
* Portions Copyright (c) 1994, Regents of the University of California
*
* IDENTIFICATION
* src/backend/access/spgist/spgscan.c
*
*-------------------------------------------------------------------------
*/
#include "postgres.h"
#include "access/relscan.h"
#include "access/spgist_private.h"
#include "miscadmin.h"
#include "storage/bufmgr.h"
#include "utils/datum.h"
#include "utils/memutils.h"
typedef struct ScanStackEntry
{
Datum reconstructedValue; /* value reconstructed from parent */
int level; /* level of items on this page */
ItemPointerData ptr; /* block and offset to scan from */
} ScanStackEntry;
/* Free a ScanStackEntry */
static void
freeScanStackEntry(SpGistScanOpaque so, ScanStackEntry *stackEntry)
{
if (!so->state.attType.attbyval &&
DatumGetPointer(stackEntry->reconstructedValue) != NULL)
pfree(DatumGetPointer(stackEntry->reconstructedValue));
pfree(stackEntry);
}
/* Free the entire stack */
static void
freeScanStack(SpGistScanOpaque so)
{
ListCell *lc;
foreach(lc, so->scanStack)
{
freeScanStackEntry(so, (ScanStackEntry *) lfirst(lc));
}
list_free(so->scanStack);
so->scanStack = NIL;
}
/* Initialize scanStack with a single entry for the root page */
static void
resetSpGistScanOpaque(SpGistScanOpaque so)
{
ScanStackEntry *startEntry = palloc0(sizeof(ScanStackEntry));
ItemPointerSet(&startEntry->ptr, SPGIST_HEAD_BLKNO, FirstOffsetNumber);
freeScanStack(so);
so->scanStack = list_make1(startEntry);
so->nPtrs = so->iPtr = 0;
}
Datum
spgbeginscan(PG_FUNCTION_ARGS)
{
Relation rel = (Relation) PG_GETARG_POINTER(0);
int keysz = PG_GETARG_INT32(1);
/* ScanKey scankey = (ScanKey) PG_GETARG_POINTER(2); */
IndexScanDesc scan;
SpGistScanOpaque so;
scan = RelationGetIndexScan(rel, keysz, 0);
so = (SpGistScanOpaque) palloc0(sizeof(SpGistScanOpaqueData));
initSpGistState(&so->state, scan->indexRelation);
so->tempCxt = AllocSetContextCreate(CurrentMemoryContext,
"SP-GiST search temporary context",
ALLOCSET_DEFAULT_MINSIZE,
ALLOCSET_DEFAULT_INITSIZE,
ALLOCSET_DEFAULT_MAXSIZE);
resetSpGistScanOpaque(so);
scan->opaque = so;
PG_RETURN_POINTER(scan);
}
Datum
spgrescan(PG_FUNCTION_ARGS)
{
IndexScanDesc scan = (IndexScanDesc) PG_GETARG_POINTER(0);
SpGistScanOpaque so = (SpGistScanOpaque) scan->opaque;
ScanKey scankey = (ScanKey) PG_GETARG_POINTER(1);
if (scankey && scan->numberOfKeys > 0)
{
memmove(scan->keyData, scankey,
scan->numberOfKeys * sizeof(ScanKeyData));
}
resetSpGistScanOpaque(so);
PG_RETURN_VOID();
}
Datum
spgendscan(PG_FUNCTION_ARGS)
{
IndexScanDesc scan = (IndexScanDesc) PG_GETARG_POINTER(0);
SpGistScanOpaque so = (SpGistScanOpaque) scan->opaque;
MemoryContextDelete(so->tempCxt);
PG_RETURN_VOID();
}
Datum
spgmarkpos(PG_FUNCTION_ARGS)
{
elog(ERROR, "SPGiST does not support mark/restore");
PG_RETURN_VOID();
}
Datum
spgrestrpos(PG_FUNCTION_ARGS)
{
elog(ERROR, "SPGiST does not support mark/restore");
PG_RETURN_VOID();
}
/*
* Test whether a leaf datum satisfies all the scan keys
*
* *recheck is set true if any of the operators are lossy
*/
static bool
spgLeafTest(SpGistScanOpaque so, Datum leafDatum,
int level, Datum reconstructedValue,
bool *recheck)
{
bool result = true;
spgLeafConsistentIn in;
spgLeafConsistentOut out;
MemoryContext oldCtx;
int i;
*recheck = false;
/* set up values that are the same for all quals */
in.reconstructedValue = reconstructedValue;
in.level = level;
in.leafDatum = leafDatum;
/* Apply each leaf consistent function, working in the temp context */
oldCtx = MemoryContextSwitchTo(so->tempCxt);
for (i = 0; i < so->numberOfKeys; i++)
{
in.strategy = so->keyData[i].sk_strategy;
in.query = so->keyData[i].sk_argument;
out.recheck = false;
result = DatumGetBool(FunctionCall2Coll(&so->state.leafConsistentFn,
so->keyData[i].sk_collation,
PointerGetDatum(&in),
PointerGetDatum(&out)));
*recheck |= out.recheck;
if (!result)
break;
}
MemoryContextSwitchTo(oldCtx);
return result;
}
/*
* Walk the tree and report all tuples passing the scan quals to the storeRes
* subroutine.
*
* If scanWholeIndex is true, we'll do just that. If not, we'll stop at the
* next page boundary once we have reported at least one tuple.
*/
static void
spgWalk(Relation index, SpGistScanOpaque so, bool scanWholeIndex,
void (*storeRes) (SpGistScanOpaque, ItemPointer, bool))
{
Buffer buffer = InvalidBuffer;
bool reportedSome = false;
while (scanWholeIndex || !reportedSome)
{
ScanStackEntry *stackEntry;
BlockNumber blkno;
OffsetNumber offset;
Page page;
/* Pull next to-do item from the list */
if (so->scanStack == NIL)
break; /* there are no more pages to scan */
stackEntry = (ScanStackEntry *) linitial(so->scanStack);
so->scanStack = list_delete_first(so->scanStack);
redirect:
/* Check for interrupts, just in case of infinite loop */
CHECK_FOR_INTERRUPTS();
blkno = ItemPointerGetBlockNumber(&stackEntry->ptr);
offset = ItemPointerGetOffsetNumber(&stackEntry->ptr);
if (buffer == InvalidBuffer)
{
buffer = ReadBuffer(index, blkno);
LockBuffer(buffer, BUFFER_LOCK_SHARE);
}
else if (blkno != BufferGetBlockNumber(buffer))
{
UnlockReleaseBuffer(buffer);
buffer = ReadBuffer(index, blkno);
LockBuffer(buffer, BUFFER_LOCK_SHARE);
}
/* else new pointer points to the same page, no work needed */
page = BufferGetPage(buffer);
if (SpGistPageIsLeaf(page))
{
SpGistLeafTuple leafTuple;
OffsetNumber max = PageGetMaxOffsetNumber(page);
bool recheck = false;
if (blkno == SPGIST_HEAD_BLKNO)
{
/* When root is a leaf, examine all its tuples */
for (offset = FirstOffsetNumber; offset <= max; offset++)
{
leafTuple = (SpGistLeafTuple)
PageGetItem(page, PageGetItemId(page, offset));
if (leafTuple->tupstate != SPGIST_LIVE)
{
/* all tuples on root should be live */
elog(ERROR, "unexpected SPGiST tuple state: %d",
leafTuple->tupstate);
}
Assert(ItemPointerIsValid(&leafTuple->heapPtr));
if (spgLeafTest(so,
SGLTDATUM(leafTuple, &so->state),
stackEntry->level,
stackEntry->reconstructedValue,
&recheck))
{
storeRes(so, &leafTuple->heapPtr, recheck);
reportedSome = true;
}
}
}
else
{
/* Normal case: just examine the chain we arrived at */
while (offset != InvalidOffsetNumber)
{
Assert(offset >= FirstOffsetNumber && offset <= max);
leafTuple = (SpGistLeafTuple)
PageGetItem(page, PageGetItemId(page, offset));
if (leafTuple->tupstate != SPGIST_LIVE)
{
if (leafTuple->tupstate == SPGIST_REDIRECT)
{
/* redirection tuple should be first in chain */
Assert(offset == ItemPointerGetOffsetNumber(&stackEntry->ptr));
/* transfer attention to redirect point */
stackEntry->ptr = ((SpGistDeadTuple) leafTuple)->pointer;
Assert(ItemPointerGetBlockNumber(&stackEntry->ptr) != SPGIST_METAPAGE_BLKNO);
goto redirect;
}
if (leafTuple->tupstate == SPGIST_DEAD)
{
/* dead tuple should be first in chain */
Assert(offset == ItemPointerGetOffsetNumber(&stackEntry->ptr));
/* No live entries on this page */
Assert(leafTuple->nextOffset == InvalidOffsetNumber);
break;
}
/* We should not arrive at a placeholder */
elog(ERROR, "unexpected SPGiST tuple state: %d",
leafTuple->tupstate);
}
Assert(ItemPointerIsValid(&leafTuple->heapPtr));
if (spgLeafTest(so,
SGLTDATUM(leafTuple, &so->state),
stackEntry->level,
stackEntry->reconstructedValue,
&recheck))
{
storeRes(so, &leafTuple->heapPtr, recheck);
reportedSome = true;
}
offset = leafTuple->nextOffset;
}
}
}
else /* page is inner */
{
SpGistInnerTuple innerTuple;
SpGistNodeTuple node;
int i;
innerTuple = (SpGistInnerTuple) PageGetItem(page,
PageGetItemId(page, offset));
if (innerTuple->tupstate != SPGIST_LIVE)
{
if (innerTuple->tupstate == SPGIST_REDIRECT)
{
/* transfer attention to redirect point */
stackEntry->ptr = ((SpGistDeadTuple) innerTuple)->pointer;
Assert(ItemPointerGetBlockNumber(&stackEntry->ptr) != SPGIST_METAPAGE_BLKNO);
goto redirect;
}
elog(ERROR, "unexpected SPGiST tuple state: %d",
innerTuple->tupstate);
}
if (so->numberOfKeys == 0)
{
/*
* This case cannot happen at the moment, because we don't
* set pg_am.amoptionalkey for SP-GiST. In order for full
* index scans to produce correct answers, we'd need to
* index nulls, which we don't.
*/
Assert(false);
#ifdef NOT_USED
/*
* A full index scan could be done approximately like this,
* but note that reconstruction of indexed values would be
* impossible unless the API for inner_consistent is changed.
*/
SGITITERATE(innerTuple, i, node)
{
if (ItemPointerIsValid(&node->t_tid))
{
ScanStackEntry *newEntry = palloc(sizeof(ScanStackEntry));
newEntry->ptr = node->t_tid;
newEntry->level = -1;
newEntry->reconstructedValue = (Datum) 0;
so->scanStack = lcons(newEntry, so->scanStack);
}
}
#endif
}
else
{
spgInnerConsistentIn in;
spgInnerConsistentOut out;
SpGistNodeTuple *nodes;
int *andMap;
int *levelAdds;
Datum *reconstructedValues;
int j,
nMatches = 0;
MemoryContext oldCtx;
/* use temp context for calling inner_consistent */
oldCtx = MemoryContextSwitchTo(so->tempCxt);
/* set up values that are the same for all scankeys */
in.reconstructedValue = stackEntry->reconstructedValue;
in.level = stackEntry->level;
in.allTheSame = innerTuple->allTheSame;
in.hasPrefix = (innerTuple->prefixSize > 0);
in.prefixDatum = SGITDATUM(innerTuple, &so->state);
in.nNodes = innerTuple->nNodes;
in.nodeLabels = spgExtractNodeLabels(&so->state, innerTuple);
/* collect node pointers */
nodes = (SpGistNodeTuple *) palloc(sizeof(SpGistNodeTuple) * in.nNodes);
SGITITERATE(innerTuple, i, node)
{
nodes[i] = node;
}
andMap = (int *) palloc0(sizeof(int) * in.nNodes);
levelAdds = (int *) palloc0(sizeof(int) * in.nNodes);
reconstructedValues = (Datum *) palloc0(sizeof(Datum) * in.nNodes);
for (j = 0; j < so->numberOfKeys; j++)
{
in.strategy = so->keyData[j].sk_strategy;
in.query = so->keyData[j].sk_argument;
memset(&out, 0, sizeof(out));
FunctionCall2Coll(&so->state.innerConsistentFn,
so->keyData[j].sk_collation,
PointerGetDatum(&in),
PointerGetDatum(&out));
/* If allTheSame, they should all or none of 'em match */
if (innerTuple->allTheSame)
if (out.nNodes != 0 && out.nNodes != in.nNodes)
elog(ERROR, "inconsistent inner_consistent results for allTheSame inner tuple");
nMatches = 0;
for (i = 0; i < out.nNodes; i++)
{
int nodeN = out.nodeNumbers[i];
andMap[nodeN]++;
if (andMap[nodeN] == j + 1)
nMatches++;
if (out.levelAdds)
levelAdds[nodeN] = out.levelAdds[i];
if (out.reconstructedValues)
reconstructedValues[nodeN] = out.reconstructedValues[i];
}
/* quit as soon as all nodes have failed some qual */
if (nMatches == 0)
break;
}
MemoryContextSwitchTo(oldCtx);
if (nMatches > 0)
{
for (i = 0; i < in.nNodes; i++)
{
if (andMap[i] == so->numberOfKeys &&
ItemPointerIsValid(&nodes[i]->t_tid))
{
ScanStackEntry *newEntry;
/* Create new work item for this node */
newEntry = palloc(sizeof(ScanStackEntry));
newEntry->ptr = nodes[i]->t_tid;
newEntry->level = stackEntry->level + levelAdds[i];
/* Must copy value out of temp context */
newEntry->reconstructedValue =
datumCopy(reconstructedValues[i],
so->state.attType.attbyval,
so->state.attType.attlen);
so->scanStack = lcons(newEntry, so->scanStack);
}
}
}
}
}
/* done with this scan stack entry */
freeScanStackEntry(so, stackEntry);
/* clear temp context before proceeding to the next one */
MemoryContextReset(so->tempCxt);
}
if (buffer != InvalidBuffer)
UnlockReleaseBuffer(buffer);
}
/* storeRes subroutine for getbitmap case */
static void
storeBitmap(SpGistScanOpaque so, ItemPointer heapPtr, bool recheck)
{
tbm_add_tuples(so->tbm, heapPtr, 1, recheck);
so->ntids++;
}
Datum
spggetbitmap(PG_FUNCTION_ARGS)
{
IndexScanDesc scan = (IndexScanDesc) PG_GETARG_POINTER(0);
TIDBitmap *tbm = (TIDBitmap *) PG_GETARG_POINTER(1);
SpGistScanOpaque so = (SpGistScanOpaque) scan->opaque;
/* Copy scankey to *so so we don't need to pass it around separately */
so->numberOfKeys = scan->numberOfKeys;
so->keyData = scan->keyData;
so->tbm = tbm;
so->ntids = 0;
spgWalk(scan->indexRelation, so, true, storeBitmap);
PG_RETURN_INT64(so->ntids);
}
/* storeRes subroutine for gettuple case */
static void
storeGettuple(SpGistScanOpaque so, ItemPointer heapPtr, bool recheck)
{
Assert(so->nPtrs < MaxIndexTuplesPerPage);
so->heapPtrs[so->nPtrs] = *heapPtr;
so->recheck[so->nPtrs] = recheck;
so->nPtrs++;
}
Datum
spggettuple(PG_FUNCTION_ARGS)
{
IndexScanDesc scan = (IndexScanDesc) PG_GETARG_POINTER(0);
ScanDirection dir = (ScanDirection) PG_GETARG_INT32(1);
SpGistScanOpaque so = (SpGistScanOpaque) scan->opaque;
if (dir != ForwardScanDirection)
elog(ERROR, "SP-GiST only supports forward scan direction");
/* Copy scankey to *so so we don't need to pass it around separately */
so->numberOfKeys = scan->numberOfKeys;
so->keyData = scan->keyData;
for (;;)
{
if (so->iPtr < so->nPtrs)
{
/* continuing to return tuples from a leaf page */
scan->xs_ctup.t_self = so->heapPtrs[so->iPtr];
scan->xs_recheck = so->recheck[so->iPtr];
so->iPtr++;
PG_RETURN_BOOL(true);
}
so->iPtr = so->nPtrs = 0;
spgWalk(scan->indexRelation, so, false, storeGettuple);
if (so->nPtrs == 0)
break; /* must have completed scan */
}
PG_RETURN_BOOL(false);
}


@@ -0,0 +1,594 @@
src/backend/access/spgist/spgtextproc.c
/*-------------------------------------------------------------------------
*
* spgtextproc.c
* implementation of compressed-suffix tree over text
*
*
* Portions Copyright (c) 1996-2011, PostgreSQL Global Development Group
* Portions Copyright (c) 1994, Regents of the University of California
*
* IDENTIFICATION
* src/backend/access/spgist/spgtextproc.c
*
*-------------------------------------------------------------------------
*/
#include "postgres.h"
#include "access/spgist.h"
#include "catalog/pg_type.h"
#include "mb/pg_wchar.h"
#include "utils/builtins.h"
#include "utils/datum.h"
#include "utils/pg_locale.h"
/*
* In the worst case, an inner tuple in a text suffix tree could have as many
* as 256 nodes (one for each possible byte value). Each node can take 16
* bytes on MAXALIGN=8 machines. The inner tuple must fit on an index page
* of size BLCKSZ. Rather than assuming we know the exact amount of overhead
* imposed by page headers, tuple headers, etc, we leave 100 bytes for that
* (the actual overhead should be no more than 56 bytes at this writing, so
* there is slop in this number). The upshot is that the maximum safe prefix
* length is this:
*/
#define SPGIST_MAX_PREFIX_LENGTH (BLCKSZ - 256 * 16 - 100)
/* Struct for sorting values in picksplit */
typedef struct spgNodePtr
{
Datum d;
int i;
uint8 c;
} spgNodePtr;
Datum
spg_text_config(PG_FUNCTION_ARGS)
{
/* spgConfigIn *cfgin = (spgConfigIn *) PG_GETARG_POINTER(0); */
spgConfigOut *cfg = (spgConfigOut *) PG_GETARG_POINTER(1);
cfg->prefixType = TEXTOID;
cfg->labelType = CHAROID;
cfg->longValuesOK = true; /* suffixing will shorten long values */
PG_RETURN_VOID();
}
/*
* Form a text datum from the given not-necessarily-null-terminated string,
* using short varlena header format if possible
*/
static Datum
formTextDatum(const char *data, int datalen)
{
char *p;
p = (char *) palloc(datalen + VARHDRSZ);
if (datalen + VARHDRSZ_SHORT <= VARATT_SHORT_MAX)
{
SET_VARSIZE_SHORT(p, datalen + VARHDRSZ_SHORT);
if (datalen)
memcpy(p + VARHDRSZ_SHORT, data, datalen);
}
else
{
SET_VARSIZE(p, datalen + VARHDRSZ);
memcpy(p + VARHDRSZ, data, datalen);
}
return PointerGetDatum(p);
}
/*
* Find the length of the common prefix of a and b
*/
static int
commonPrefix(const char *a, const char *b, int lena, int lenb)
{
int i = 0;
while (i < lena && i < lenb && *a == *b)
{
a++;
b++;
i++;
}
return i;
}
/*
* Binary search an array of uint8 datums for a match to c
*
* On success, *i gets the match location; on failure, it gets where to insert
*/
static bool
searchChar(Datum *nodeLabels, int nNodes, uint8 c, int *i)
{
int StopLow = 0,
StopHigh = nNodes;
while (StopLow < StopHigh)
{
int StopMiddle = (StopLow + StopHigh) >> 1;
uint8 middle = DatumGetUInt8(nodeLabels[StopMiddle]);
if (c < middle)
StopHigh = StopMiddle;
else if (c > middle)
StopLow = StopMiddle + 1;
else
{
*i = StopMiddle;
return true;
}
}
*i = StopHigh;
return false;
}
Datum
spg_text_choose(PG_FUNCTION_ARGS)
{
spgChooseIn *in = (spgChooseIn *) PG_GETARG_POINTER(0);
spgChooseOut *out = (spgChooseOut *) PG_GETARG_POINTER(1);
text *inText = DatumGetTextPP(in->datum);
char *inStr = VARDATA_ANY(inText);
int inSize = VARSIZE_ANY_EXHDR(inText);
uint8 nodeChar = '\0';
int i = 0;
int commonLen = 0;
/* Check for prefix match, set nodeChar to first byte after prefix */
if (in->hasPrefix)
{
text *prefixText = DatumGetTextPP(in->prefixDatum);
char *prefixStr = VARDATA_ANY(prefixText);
int prefixSize = VARSIZE_ANY_EXHDR(prefixText);
commonLen = commonPrefix(inStr + in->level,
prefixStr,
inSize - in->level,
prefixSize);
if (commonLen == prefixSize)
{
if (inSize - in->level > commonLen)
nodeChar = *(uint8 *) (inStr + in->level + commonLen);
else
nodeChar = '\0';
}
else
{
/* Must split tuple because incoming value doesn't match prefix */
out->resultType = spgSplitTuple;
if (commonLen == 0)
{
out->result.splitTuple.prefixHasPrefix = false;
}
else
{
out->result.splitTuple.prefixHasPrefix = true;
out->result.splitTuple.prefixPrefixDatum =
formTextDatum(prefixStr, commonLen);
}
out->result.splitTuple.nodeLabel =
UInt8GetDatum(*(prefixStr + commonLen));
if (prefixSize - commonLen == 1)
{
out->result.splitTuple.postfixHasPrefix = false;
}
else
{
out->result.splitTuple.postfixHasPrefix = true;
out->result.splitTuple.postfixPrefixDatum =
formTextDatum(prefixStr + commonLen + 1,
prefixSize - commonLen - 1);
}
PG_RETURN_VOID();
}
}
else if (inSize > in->level)
{
nodeChar = *(uint8 *) (inStr + in->level);
}
else
{
nodeChar = '\0';
}
/* Look up nodeChar in the node label array */
if (searchChar(in->nodeLabels, in->nNodes, nodeChar, &i))
{
/*
* Descend to existing node. (If in->allTheSame, the core code will
* ignore our nodeN specification here, but that's OK. We still
* have to provide the correct levelAdd and restDatum values, and
* those are the same regardless of which node gets chosen by core.)
*/
out->resultType = spgMatchNode;
out->result.matchNode.nodeN = i;
out->result.matchNode.levelAdd = commonLen + 1;
if (inSize - in->level - commonLen - 1 > 0)
out->result.matchNode.restDatum =
formTextDatum(inStr + in->level + commonLen + 1,
inSize - in->level - commonLen - 1);
else
out->result.matchNode.restDatum =
formTextDatum(NULL, 0);
}
else if (in->allTheSame)
{
/*
* Can't use AddNode action, so split the tuple. The upper tuple
* has the same prefix as before and uses an empty node label for
* the lower tuple. The lower tuple has no prefix and the same
* node labels as the original tuple.
*/
out->resultType = spgSplitTuple;
out->result.splitTuple.prefixHasPrefix = in->hasPrefix;
out->result.splitTuple.prefixPrefixDatum = in->prefixDatum;
out->result.splitTuple.nodeLabel = UInt8GetDatum('\0');
out->result.splitTuple.postfixHasPrefix = false;
}
else
{
/* Add a node for the not-previously-seen nodeChar value */
out->resultType = spgAddNode;
out->result.addNode.nodeLabel = UInt8GetDatum(nodeChar);
out->result.addNode.nodeN = i;
}
PG_RETURN_VOID();
}
/* qsort comparator to sort spgNodePtr structs by "c" */
static int
cmpNodePtr(const void *a, const void *b)
{
const spgNodePtr *aa = (const spgNodePtr *) a;
const spgNodePtr *bb = (const spgNodePtr *) b;
if (aa->c == bb->c)
return 0;
else if (aa->c > bb->c)
return 1;
else
return -1;
}
Datum
spg_text_picksplit(PG_FUNCTION_ARGS)
{
spgPickSplitIn *in = (spgPickSplitIn *) PG_GETARG_POINTER(0);
spgPickSplitOut *out = (spgPickSplitOut *) PG_GETARG_POINTER(1);
text *text0 = DatumGetTextPP(in->datums[0]);
int i,
commonLen;
spgNodePtr *nodes;
/* Identify longest common prefix, if any */
commonLen = VARSIZE_ANY_EXHDR(text0);
for (i = 1; i < in->nTuples && commonLen > 0; i++)
{
text *texti = DatumGetTextPP(in->datums[i]);
int tmp = commonPrefix(VARDATA_ANY(text0),
VARDATA_ANY(texti),
VARSIZE_ANY_EXHDR(text0),
VARSIZE_ANY_EXHDR(texti));
if (tmp < commonLen)
commonLen = tmp;
}
/*
* Limit the prefix length, if necessary, to ensure that the resulting
* inner tuple will fit on a page.
*/
commonLen = Min(commonLen, SPGIST_MAX_PREFIX_LENGTH);
/* Set node prefix to be that string, if it's not empty */
if (commonLen == 0)
{
out->hasPrefix = false;
}
else
{
out->hasPrefix = true;
out->prefixDatum = formTextDatum(VARDATA_ANY(text0), commonLen);
}
/* Extract the node label (first non-common byte) from each value */
nodes = (spgNodePtr *) palloc(sizeof(spgNodePtr) * in->nTuples);
for (i = 0; i < in->nTuples; i++)
{
text *texti = DatumGetTextPP(in->datums[i]);
if (commonLen < VARSIZE_ANY_EXHDR(texti))
nodes[i].c = *(uint8 *) (VARDATA_ANY(texti) + commonLen);
else
nodes[i].c = '\0'; /* use \0 if string is all common */
nodes[i].i = i;
nodes[i].d = in->datums[i];
}
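/*
* Illustrative example: for the values "apple", "application" and
* "apply", the common prefix is "appl" (commonLen = 4), the node labels
* are 'e', 'i' and 'y', and the leaf datums become the remainders "",
* "cation" and "".
*/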
/*
* Sort by label bytes so that we can group the values into nodes. This
* also ensures that the nodes are ordered by label value, allowing the
* use of binary search in searchChar.
*/
qsort(nodes, in->nTuples, sizeof(*nodes), cmpNodePtr);
/* And emit results */
out->nNodes = 0;
out->nodeLabels = (Datum *) palloc(sizeof(Datum) * in->nTuples);
out->mapTuplesToNodes = (int *) palloc(sizeof(int) * in->nTuples);
out->leafTupleDatums = (Datum *) palloc(sizeof(Datum) * in->nTuples);
for (i = 0; i < in->nTuples; i++)
{
text *texti = DatumGetTextPP(nodes[i].d);
Datum leafD;
if (i == 0 || nodes[i].c != nodes[i - 1].c)
{
out->nodeLabels[out->nNodes] = UInt8GetDatum(nodes[i].c);
out->nNodes++;
}
if (commonLen < VARSIZE_ANY_EXHDR(texti))
leafD = formTextDatum(VARDATA_ANY(texti) + commonLen + 1,
VARSIZE_ANY_EXHDR(texti) - commonLen - 1);
else
leafD = formTextDatum(NULL, 0);
out->leafTupleDatums[nodes[i].i] = leafD;
out->mapTuplesToNodes[nodes[i].i] = out->nNodes - 1;
}
PG_RETURN_VOID();
}
Datum
spg_text_inner_consistent(PG_FUNCTION_ARGS)
{
spgInnerConsistentIn *in = (spgInnerConsistentIn *) PG_GETARG_POINTER(0);
spgInnerConsistentOut *out = (spgInnerConsistentOut *) PG_GETARG_POINTER(1);
StrategyNumber strategy = in->strategy;
text *inText;
int inSize;
int i;
text *reconstrText = NULL;
int maxReconstrLen = 0;
text *prefixText = NULL;
int prefixSize = 0;
/*
* If it's a collation-aware operator, but the collation is C, we can
* treat it as non-collation-aware.
*/
if (strategy > 10 &&
lc_collate_is_c(PG_GET_COLLATION()))
strategy -= 10;
inText = DatumGetTextPP(in->query);
inSize = VARSIZE_ANY_EXHDR(inText);
/*
* Reconstruct values represented at this tuple, including parent data,
* prefix of this tuple if any, and the node label if any. in->level
* should be the length of the previously reconstructed value, and the
* number of bytes added here is prefixSize or prefixSize + 1.
*
* Note: we assume that in->reconstructedValue isn't toasted and doesn't
* have a short varlena header. This is okay because it must have been
* created by a previous invocation of this routine, and we always emit
* long-format reconstructed values.
*/
Assert(in->level == 0 ? DatumGetPointer(in->reconstructedValue) == NULL :
VARSIZE_ANY_EXHDR(DatumGetPointer(in->reconstructedValue)) == in->level);
maxReconstrLen = in->level + 1;
if (in->hasPrefix)
{
prefixText = DatumGetTextPP(in->prefixDatum);
prefixSize = VARSIZE_ANY_EXHDR(prefixText);
maxReconstrLen += prefixSize;
}
reconstrText = palloc(VARHDRSZ + maxReconstrLen);
SET_VARSIZE(reconstrText, VARHDRSZ + maxReconstrLen);
if (in->level)
memcpy(VARDATA(reconstrText),
VARDATA(DatumGetPointer(in->reconstructedValue)),
in->level);
if (prefixSize)
memcpy(((char *) VARDATA(reconstrText)) + in->level,
VARDATA_ANY(prefixText),
prefixSize);
/* last byte of reconstrText will be filled in below */
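/*
* Illustrative layout, assuming in->level = 2 and a 3-byte prefix:
*
*		[varlena hdr][2 parent bytes][3 prefix bytes][1 label byte]
*
* giving maxReconstrLen = 6; only the final label byte changes as each
* node is considered below.
*/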
/*
* Scan the child nodes. For each one, complete the reconstructed value
* and see if it's consistent with the query. If so, emit an entry into
* the output arrays.
*/
out->nodeNumbers = (int *) palloc(sizeof(int) * in->nNodes);
out->levelAdds = (int *) palloc(sizeof(int) * in->nNodes);
out->reconstructedValues = (Datum *) palloc(sizeof(Datum) * in->nNodes);
out->nNodes = 0;
for (i = 0; i < in->nNodes; i++)
{
uint8 nodeChar = DatumGetUInt8(in->nodeLabels[i]);
int thisLen;
int r;
bool res = false;
/* If nodeChar is zero, don't include it in data */
if (nodeChar == '\0')
thisLen = maxReconstrLen - 1;
else
{
((char *) VARDATA(reconstrText))[maxReconstrLen - 1] = nodeChar;
thisLen = maxReconstrLen;
}
r = memcmp(VARDATA(reconstrText), VARDATA_ANY(inText),
Min(inSize, thisLen));
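/*
* Since every value in this subtree starts with the reconstructed
* prefix, r < 0 means all of them sort before the query in binary
* order, and r > 0 means all of them sort after it; r == 0 only says
* the compared bytes match, so the search must keep descending.
*/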
switch (strategy)
{
case BTLessStrategyNumber:
case BTLessEqualStrategyNumber:
if (r <= 0)
res = true;
break;
case BTEqualStrategyNumber:
if (r == 0 && inSize >= thisLen)
res = true;
break;
case BTGreaterEqualStrategyNumber:
case BTGreaterStrategyNumber:
if (r >= 0)
res = true;
break;
case BTLessStrategyNumber + 10:
case BTLessEqualStrategyNumber + 10:
case BTGreaterEqualStrategyNumber + 10:
case BTGreaterStrategyNumber + 10:
/*
* with a non-C collation we need to traverse the whole tree :-(
*/
res = true;
break;
default:
elog(ERROR, "unrecognized strategy number: %d",
in->strategy);
break;
}
if (res)
{
out->nodeNumbers[out->nNodes] = i;
out->levelAdds[out->nNodes] = thisLen - in->level;
SET_VARSIZE(reconstrText, VARHDRSZ + thisLen);
out->reconstructedValues[out->nNodes] =
datumCopy(PointerGetDatum(reconstrText), false, -1);
out->nNodes++;
}
}
PG_RETURN_VOID();
}
Datum
spg_text_leaf_consistent(PG_FUNCTION_ARGS)
{
spgLeafConsistentIn *in = (spgLeafConsistentIn *) PG_GETARG_POINTER(0);
spgLeafConsistentOut *out = (spgLeafConsistentOut *) PG_GETARG_POINTER(1);
StrategyNumber strategy = in->strategy;
text *query = DatumGetTextPP(in->query);
int level = in->level;
text *leafValue,
*reconstrValue = NULL;
char *fullValue;
int fullLen;
int queryLen;
int r;
bool res;
/* all tests are exact */
out->recheck = false;
leafValue = DatumGetTextPP(in->leafDatum);
if (DatumGetPointer(in->reconstructedValue))
reconstrValue = DatumGetTextP(in->reconstructedValue);
Assert(level == 0 ? reconstrValue == NULL :
VARSIZE_ANY_EXHDR(reconstrValue) == level);
fullLen = level + VARSIZE_ANY_EXHDR(leafValue);
queryLen = VARSIZE_ANY_EXHDR(query);
/* For equality, we needn't reconstruct fullValue if not same length */
if (strategy == BTEqualStrategyNumber && queryLen != fullLen)
PG_RETURN_BOOL(false);
/* Else, reconstruct the full string represented by this leaf tuple */
if (VARSIZE_ANY_EXHDR(leafValue) == 0 && level > 0)
{
fullValue = VARDATA(reconstrValue);
}
else
{
fullValue = palloc(fullLen);
if (level)
memcpy(fullValue, VARDATA(reconstrValue), level);
if (VARSIZE_ANY_EXHDR(leafValue) > 0)
memcpy(fullValue + level, VARDATA_ANY(leafValue),
VARSIZE_ANY_EXHDR(leafValue));
}
/* Run the appropriate type of comparison */
if (strategy > 10)
{
/* Collation-aware comparison */
strategy -= 10;
/* If asserts are enabled, verify encoding of reconstructed string */
Assert(pg_verifymbstr(fullValue, fullLen, false));
r = varstr_cmp(fullValue, Min(queryLen, fullLen),
VARDATA_ANY(query), Min(queryLen, fullLen),
PG_GET_COLLATION());
}
else
{
/* Non-collation-aware comparison */
r = memcmp(fullValue, VARDATA_ANY(query), Min(queryLen, fullLen));
}
if (r == 0)
{
if (queryLen > fullLen)
r = -1;
else if (queryLen < fullLen)
r = 1;
}
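/*
* After this tie-break, r orders the full strings: when the compared
* bytes are equal, the shorter string sorts first (for example, "app"
* sorts before "apple").
*/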
switch (strategy)
{
case BTLessStrategyNumber:
res = (r < 0);
break;
case BTLessEqualStrategyNumber:
res = (r <= 0);
break;
case BTEqualStrategyNumber:
res = (r == 0);
break;
case BTGreaterEqualStrategyNumber:
res = (r >= 0);
break;
case BTGreaterStrategyNumber:
res = (r > 0);
break;
default:
elog(ERROR, "unrecognized strategy number: %d", in->strategy);
res = false;
break;
}
PG_RETURN_BOOL(res);
}

View File

@ -0,0 +1,850 @@
/*-------------------------------------------------------------------------
*
* spgutils.c
* various support functions for SP-GiST
*
*
* Portions Copyright (c) 1996-2011, PostgreSQL Global Development Group
* Portions Copyright (c) 1994, Regents of the University of California
*
* IDENTIFICATION
* src/backend/access/spgist/spgutils.c
*
*-------------------------------------------------------------------------
*/
#include "postgres.h"
#include "access/genam.h"
#include "access/reloptions.h"
#include "access/spgist_private.h"
#include "access/transam.h"
#include "access/xact.h"
#include "storage/bufmgr.h"
#include "storage/indexfsm.h"
#include "storage/lmgr.h"
#include "utils/lsyscache.h"
/* Fill in a SpGistTypeDesc struct with info about the specified data type */
static void
fillTypeDesc(SpGistTypeDesc *desc, Oid type)
{
desc->type = type;
get_typlenbyval(type, &desc->attlen, &desc->attbyval);
}
/* Initialize SpGistState for working with the given index */
void
initSpGistState(SpGistState *state, Relation index)
{
Oid atttype;
spgConfigIn in;
/* SPGiST doesn't support multi-column indexes */
Assert(index->rd_att->natts == 1);
/*
* Get the actual data type of the indexed column from the index tupdesc.
* We pass this to the opclass config function so that polymorphic
* opclasses are possible.
*/
atttype = index->rd_att->attrs[0]->atttypid;
/* Get the config info for the opclass */
in.attType = atttype;
memset(&state->config, 0, sizeof(state->config));
FunctionCall2Coll(index_getprocinfo(index, 1, SPGIST_CONFIG_PROC),
index->rd_indcollation[0],
PointerGetDatum(&in),
PointerGetDatum(&state->config));
/* Get the information we need about each relevant datatype */
fillTypeDesc(&state->attType, atttype);
fillTypeDesc(&state->attPrefixType, state->config.prefixType);
fillTypeDesc(&state->attLabelType, state->config.labelType);
/* Get lookup info for opclass support procs */
fmgr_info_copy(&(state->chooseFn),
index_getprocinfo(index, 1, SPGIST_CHOOSE_PROC),
CurrentMemoryContext);
fmgr_info_copy(&(state->picksplitFn),
index_getprocinfo(index, 1, SPGIST_PICKSPLIT_PROC),
CurrentMemoryContext);
fmgr_info_copy(&(state->innerConsistentFn),
index_getprocinfo(index, 1, SPGIST_INNER_CONSISTENT_PROC),
CurrentMemoryContext);
fmgr_info_copy(&(state->leafConsistentFn),
index_getprocinfo(index, 1, SPGIST_LEAF_CONSISTENT_PROC),
CurrentMemoryContext);
/* Make workspace for constructing dead tuples */
state->deadTupleStorage = palloc0(SGDTSIZE);
/* Set XID to use in redirection tuples */
state->myXid = GetTopTransactionIdIfAny();
state->isBuild = false;
}
/*
* Allocate a new page (either by recycling, or by extending the index file).
*
* The returned buffer is already pinned and exclusive-locked.
* Caller is responsible for initializing the page by calling SpGistInitBuffer.
*/
Buffer
SpGistNewBuffer(Relation index)
{
Buffer buffer;
bool needLock;
/* First, try to get a page from FSM */
for (;;)
{
BlockNumber blkno = GetFreeIndexPage(index);
if (blkno == InvalidBlockNumber)
break; /* nothing known to FSM */
/*
* The root page shouldn't ever be listed in FSM, but just in case it
* is, ignore it.
*/
if (blkno == SPGIST_HEAD_BLKNO)
continue;
buffer = ReadBuffer(index, blkno);
/*
* We have to guard against the possibility that someone else already
* recycled this page; the buffer may be locked if so.
*/
if (ConditionalLockBuffer(buffer))
{
Page page = BufferGetPage(buffer);
if (PageIsNew(page))
return buffer; /* OK to use, if never initialized */
if (SpGistPageIsDeleted(page) || PageIsEmpty(page))
return buffer; /* OK to use */
LockBuffer(buffer, BUFFER_LOCK_UNLOCK);
}
/* Can't use it, so release buffer and try again */
ReleaseBuffer(buffer);
}
/* Must extend the file */
needLock = !RELATION_IS_LOCAL(index);
if (needLock)
LockRelationForExtension(index, ExclusiveLock);
buffer = ReadBuffer(index, P_NEW);
LockBuffer(buffer, BUFFER_LOCK_EXCLUSIVE);
if (needLock)
UnlockRelationForExtension(index, ExclusiveLock);
return buffer;
}
/*
* Fetch local cache of lastUsedPages info, initializing it from the metapage
* if necessary
*/
static SpGistCache *
spgGetCache(Relation index)
{
SpGistCache *cache;
if (index->rd_amcache == NULL)
{
Buffer metabuffer;
SpGistMetaPageData *metadata;
cache = MemoryContextAlloc(index->rd_indexcxt,
sizeof(SpGistCache));
metabuffer = ReadBuffer(index, SPGIST_METAPAGE_BLKNO);
LockBuffer(metabuffer, BUFFER_LOCK_SHARE);
metadata = SpGistPageGetMeta(BufferGetPage(metabuffer));
if (metadata->magicNumber != SPGIST_MAGIC_NUMBER)
elog(ERROR, "index \"%s\" is not an SP-GiST index",
RelationGetRelationName(index));
*cache = metadata->lastUsedPages;
UnlockReleaseBuffer(metabuffer);
index->rd_amcache = cache;
}
else
{
cache = (SpGistCache *) index->rd_amcache;
}
return cache;
}
/*
* Update index metapage's lastUsedPages info from local cache, if possible
*
* Updating the metapage isn't critical to correct index operation, so we
* 1. use ConditionalLockBuffer to improve concurrency, and
* 2. don't WAL-log metabuffer changes, to decrease WAL traffic.
*/
void
SpGistUpdateMetaPage(Relation index)
{
SpGistCache *cache = (SpGistCache *) index->rd_amcache;
if (cache != NULL)
{
Buffer metabuffer;
SpGistMetaPageData *metadata;
metabuffer = ReadBuffer(index, SPGIST_METAPAGE_BLKNO);
if (ConditionalLockBuffer(metabuffer))
{
metadata = SpGistPageGetMeta(BufferGetPage(metabuffer));
metadata->lastUsedPages = *cache;
MarkBufferDirty(metabuffer);
UnlockReleaseBuffer(metabuffer);
}
else
{
ReleaseBuffer(metabuffer);
}
}
}
/* Macro to select proper element of lastUsedPages cache depending on flags */
#define GET_LUP(c, f) (((f) & GBUF_LEAF) ? \
&(c)->leafPage : \
&(c)->innerPage[(f) & GBUF_PARITY_MASK])
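/*
* Inner pages fall into three "parity" classes (blkno % 3); the
* insertion code assigns inner tuples to pages by tree level modulo 3,
* which tends to keep a parent inner tuple and its children on
* different pages (see the discussion of parity in the SP-GiST README).
* So we keep one cache slot per parity class, plus one for leaf pages.
*/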
/*
* Allocate and initialize a new buffer of the type and parity specified by
* flags. The returned buffer is already pinned and exclusive-locked.
*
* When requesting an inner page, if we get one with the wrong parity,
* we just release the buffer and try again. We will get a different page
* because GetFreeIndexPage will have marked the page used in FSM. The page
* is entered in our local lastUsedPages cache, so there's some hope of
* making use of it later in this session, but otherwise we rely on VACUUM
* to eventually re-enter the page in FSM, making it available for recycling.
* Note that such a page does not get marked dirty here, so unless it's used
* fairly soon, the buffer will just get discarded and the page will remain
* as it was on disk.
*
* When we return a buffer to the caller, the page is *not* entered into
* the lastUsedPages cache; we expect the caller will do so after it's taken
* whatever space it will use. This is because after the caller has used up
* some space, the page might have less space than whatever was cached
* already, so we'd rather not trash the old cache entry.
*/
static Buffer
allocNewBuffer(Relation index, int flags)
{
SpGistCache *cache = spgGetCache(index);
for (;;)
{
Buffer buffer;
buffer = SpGistNewBuffer(index);
SpGistInitBuffer(buffer, (flags & GBUF_LEAF) ? SPGIST_LEAF : 0);
if (flags & GBUF_LEAF)
{
/* Leaf pages have no parity concerns, so just use it */
return buffer;
}
else
{
BlockNumber blkno = BufferGetBlockNumber(buffer);
int blkParity = blkno % 3;
if ((flags & GBUF_PARITY_MASK) == blkParity)
{
/* Page has right parity, use it */
return buffer;
}
else
{
/* Page has wrong parity, record it in cache and try again */
cache->innerPage[blkParity].blkno = blkno;
cache->innerPage[blkParity].freeSpace =
PageGetExactFreeSpace(BufferGetPage(buffer));
UnlockReleaseBuffer(buffer);
}
}
}
}
/*
* Get a buffer of the type and parity specified by flags, having at least
* as much free space as indicated by needSpace. We use the lastUsedPages
* cache to assign the same buffer previously requested when possible.
* The returned buffer is already pinned and exclusive-locked.
*
* *isNew is set true if the page was initialized here, false if it was
* already valid.
*/
Buffer
SpGistGetBuffer(Relation index, int flags, int needSpace, bool *isNew)
{
SpGistCache *cache = spgGetCache(index);
SpGistLastUsedPage *lup;
/* Bail out if even an empty page wouldn't meet the demand */
if (needSpace > SPGIST_PAGE_CAPACITY)
elog(ERROR, "desired SPGiST tuple size is too big");
/*
* If possible, increase the space request to include relation's
* fillfactor. This ensures that when we add unrelated tuples to a page,
* we try to keep 100-fillfactor% available for adding tuples that are
* related to the ones already on it. But fillfactor mustn't cause an
* error for requests that would otherwise be legal.
*/
needSpace += RelationGetTargetPageFreeSpace(index,
SPGIST_DEFAULT_FILLFACTOR);
needSpace = Min(needSpace, SPGIST_PAGE_CAPACITY);
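/*
* For instance, with the default fillfactor of 80 this adds roughly 20%
* of the page size to the request (about 1.6kB with 8kB pages); the
* Min() above then caps the request at SPGIST_PAGE_CAPACITY so that the
* fillfactor addition can't make an otherwise-legal request fail.
*/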
/* Get the cache entry for this flags setting */
lup = GET_LUP(cache, flags);
/* If we have nothing cached, just turn it over to allocNewBuffer */
if (lup->blkno == InvalidBlockNumber)
{
*isNew = true;
return allocNewBuffer(index, flags);
}
/* root page should never be in cache */
Assert(lup->blkno != SPGIST_HEAD_BLKNO);
/* If cached freeSpace isn't enough, don't bother looking at the page */
if (lup->freeSpace >= needSpace)
{
Buffer buffer;
Page page;
buffer = ReadBuffer(index, lup->blkno);
if (!ConditionalLockBuffer(buffer))
{
/*
* buffer is locked by another process, so allocate a new buffer instead
*/
ReleaseBuffer(buffer);
*isNew = true;
return allocNewBuffer(index, flags);
}
page = BufferGetPage(buffer);
if (PageIsNew(page) || SpGistPageIsDeleted(page) || PageIsEmpty(page))
{
/* OK to initialize the page */
SpGistInitBuffer(buffer, (flags & GBUF_LEAF) ? SPGIST_LEAF : 0);
lup->freeSpace = PageGetExactFreeSpace(page) - needSpace;
*isNew = true;
return buffer;
}
/*
* Check that page is of right type and has enough space. We must
* recheck this since our cache isn't necessarily up to date.
*/
if ((flags & GBUF_LEAF) ? SpGistPageIsLeaf(page) :
!SpGistPageIsLeaf(page))
{
int freeSpace = PageGetExactFreeSpace(page);
if (freeSpace >= needSpace)
{
/* Success, update freespace info and return the buffer */
lup->freeSpace = freeSpace - needSpace;
*isNew = false;
return buffer;
}
}
/*
* Fall back to allocating a new buffer
*/
UnlockReleaseBuffer(buffer);
}
/* No success with cache, so return a new buffer */
*isNew = true;
return allocNewBuffer(index, flags);
}
/*
* Update lastUsedPages cache when done modifying a page.
*
* We update the appropriate cache entry if it already contained this page
* (its freeSpace is likely obsolete), or if this page has more space than
* whatever we had cached.
*/
void
SpGistSetLastUsedPage(Relation index, Buffer buffer)
{
SpGistCache *cache = spgGetCache(index);
SpGistLastUsedPage *lup;
int freeSpace;
Page page = BufferGetPage(buffer);
BlockNumber blkno = BufferGetBlockNumber(buffer);
int flags;
/* Never enter the root page in cache, though */
if (blkno == SPGIST_HEAD_BLKNO)
return;
if (SpGistPageIsLeaf(page))
flags = GBUF_LEAF;
else
flags = GBUF_INNER_PARITY(blkno);
lup = GET_LUP(cache, flags);
freeSpace = PageGetExactFreeSpace(page);
if (lup->blkno == InvalidBlockNumber || lup->blkno == blkno ||
lup->freeSpace < freeSpace)
{
lup->blkno = blkno;
lup->freeSpace = freeSpace;
}
}
/*
* Initialize an SPGiST page to empty, with specified flags
*/
void
SpGistInitPage(Page page, uint16 f)
{
SpGistPageOpaque opaque;
PageInit(page, BLCKSZ, MAXALIGN(sizeof(SpGistPageOpaqueData)));
opaque = SpGistPageGetOpaque(page);
memset(opaque, 0, sizeof(SpGistPageOpaqueData));
opaque->flags = f;
opaque->spgist_page_id = SPGIST_PAGE_ID;
}
/*
* Initialize a buffer's page to empty, with specified flags
*/
void
SpGistInitBuffer(Buffer b, uint16 f)
{
Assert(BufferGetPageSize(b) == BLCKSZ);
SpGistInitPage(BufferGetPage(b), f);
}
/*
* Initialize metadata page
*/
void
SpGistInitMetapage(Page page)
{
SpGistMetaPageData *metadata;
SpGistInitPage(page, SPGIST_META);
metadata = SpGistPageGetMeta(page);
memset(metadata, 0, sizeof(SpGistMetaPageData));
metadata->magicNumber = SPGIST_MAGIC_NUMBER;
/* initialize last-used-page cache to empty */
metadata->lastUsedPages.innerPage[0].blkno = InvalidBlockNumber;
metadata->lastUsedPages.innerPage[1].blkno = InvalidBlockNumber;
metadata->lastUsedPages.innerPage[2].blkno = InvalidBlockNumber;
metadata->lastUsedPages.leafPage.blkno = InvalidBlockNumber;
}
/*
* reloptions processing for SPGiST
*/
Datum
spgoptions(PG_FUNCTION_ARGS)
{
Datum reloptions = PG_GETARG_DATUM(0);
bool validate = PG_GETARG_BOOL(1);
bytea *result;
result = default_reloptions(reloptions, validate, RELOPT_KIND_SPGIST);
if (result)
PG_RETURN_BYTEA_P(result);
PG_RETURN_NULL();
}
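/*
* Illustrative usage (table and column names are hypothetical):
*
*		CREATE INDEX ON tbl USING spgist (col) WITH (fillfactor = 80);
*
* default_reloptions parses and validates the WITH list for us.
*/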
/*
* Get the space needed to store a datum of the indicated type.
* Note the result is already rounded up to a MAXALIGN boundary.
* Also, we follow the SPGiST convention that pass-by-val types are
* just stored in their Datum representation (compare memcpyDatum).
*/
unsigned int
SpGistGetTypeSize(SpGistTypeDesc *att, Datum datum)
{
unsigned int size;
if (att->attbyval)
size = sizeof(Datum);
else if (att->attlen > 0)
size = att->attlen;
else
size = VARSIZE_ANY(datum);
return MAXALIGN(size);
}
/*
* Copy the given datum to *target
*/
static void
memcpyDatum(void *target, SpGistTypeDesc *att, Datum datum)
{
unsigned int size;
if (att->attbyval)
{
memcpy(target, &datum, sizeof(Datum));
}
else
{
size = (att->attlen > 0) ? att->attlen : VARSIZE_ANY(datum);
memcpy(target, DatumGetPointer(datum), size);
}
}
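/*
* Example: for a pass-by-value type such as int4, all sizeof(Datum)
* bytes of the Datum itself are copied, matching the size computed by
* SpGistGetTypeSize; a varlena value is copied in full, including its
* header.
*/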
/*
* Construct a leaf tuple containing the given heap TID and datum value
*/
SpGistLeafTuple
spgFormLeafTuple(SpGistState *state, ItemPointer heapPtr, Datum datum)
{
SpGistLeafTuple tup;
unsigned int size;
/* compute space needed (note result is already maxaligned) */
size = SGLTHDRSZ + SpGistGetTypeSize(&state->attType, datum);
/*
* Ensure that we can replace the tuple with a dead tuple later. This
* test is unnecessary given current tuple layouts, but let's be safe.
*/
if (size < SGDTSIZE)
size = SGDTSIZE;
/* OK, form the tuple */
tup = (SpGistLeafTuple) palloc0(size);
tup->size = size;
tup->nextOffset = InvalidOffsetNumber;
tup->heapPtr = *heapPtr;
memcpyDatum(SGLTDATAPTR(tup), &state->attType, datum);
return tup;
}
/*
* Construct a node (to go into an inner tuple) containing the given label
*
* Note that the node's downlink is just set invalid here. Caller will fill
* it in later.
*/
SpGistNodeTuple
spgFormNodeTuple(SpGistState *state, Datum label, bool isnull)
{
SpGistNodeTuple tup;
unsigned int size;
unsigned short infomask = 0;
/* compute space needed (note result is already maxaligned) */
size = SGNTHDRSZ;
if (!isnull)
size += SpGistGetTypeSize(&state->attLabelType, label);
/*
* Here we make sure that the size will fit in the field reserved for it
* in t_info.
*/
if ((size & INDEX_SIZE_MASK) != size)
ereport(ERROR,
(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
errmsg("index row requires %lu bytes, maximum size is %lu",
(unsigned long) size,
(unsigned long) INDEX_SIZE_MASK)));
tup = (SpGistNodeTuple) palloc0(size);
if (isnull)
infomask |= INDEX_NULL_MASK;
/* we don't bother setting the INDEX_VAR_MASK bit */
infomask |= size;
tup->t_info = infomask;
/* The TID field will be filled in later */
ItemPointerSetInvalid(&tup->t_tid);
if (!isnull)
memcpyDatum(SGNTDATAPTR(tup), &state->attLabelType, label);
return tup;
}
/*
* Construct an inner tuple containing the given prefix and node array
*/
SpGistInnerTuple
spgFormInnerTuple(SpGistState *state, bool hasPrefix, Datum prefix,
int nNodes, SpGistNodeTuple *nodes)
{
SpGistInnerTuple tup;
unsigned int size;
unsigned int prefixSize;
int i;
char *ptr;
/* Compute size needed */
if (hasPrefix)
prefixSize = SpGistGetTypeSize(&state->attPrefixType, prefix);
else
prefixSize = 0;
size = SGITHDRSZ + prefixSize;
/* Note: we rely on node tuple sizes to be maxaligned already */
for (i = 0; i < nNodes; i++)
size += IndexTupleSize(nodes[i]);
/*
* Ensure that we can replace the tuple with a dead tuple later. This
* test is unnecessary given current tuple layouts, but let's be safe.
*/
if (size < SGDTSIZE)
size = SGDTSIZE;
/*
* Inner tuple should be small enough to fit on a page
*/
if (size > SPGIST_PAGE_CAPACITY - sizeof(ItemIdData))
ereport(ERROR,
(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
errmsg("SPGiST inner tuple size %lu exceeds maximum %lu",
(unsigned long) size,
(unsigned long) (SPGIST_PAGE_CAPACITY - sizeof(ItemIdData))),
errhint("Values larger than a buffer page cannot be indexed.")));
/*
* Check for overflow of header fields --- probably can't fail if the
* above succeeded, but let's be paranoid
*/
if (size > SGITMAXSIZE ||
prefixSize > SGITMAXPREFIXSIZE ||
nNodes > SGITMAXNNODES)
elog(ERROR, "SPGiST inner tuple header field is too small");
/* OK, form the tuple */
tup = (SpGistInnerTuple) palloc0(size);
tup->nNodes = nNodes;
tup->prefixSize = prefixSize;
tup->size = size;
if (hasPrefix)
memcpyDatum(SGITDATAPTR(tup), &state->attPrefixType, prefix);
ptr = (char *) SGITNODEPTR(tup);
for (i = 0; i < nNodes; i++)
{
SpGistNodeTuple node = nodes[i];
memcpy(ptr, node, IndexTupleSize(node));
ptr += IndexTupleSize(node);
}
return tup;
}
/*
* Construct a "dead" tuple to replace a tuple being deleted.
*
* The state can be SPGIST_REDIRECT, SPGIST_DEAD, or SPGIST_PLACEHOLDER.
* For a REDIRECT tuple, a pointer (blkno+offset) must be supplied, and
* the xid field is filled in automatically.
*
* This is called in critical sections, so we don't use palloc; the tuple
* is built in preallocated storage. It should be copied before another
* call with different parameters can occur.
*/
SpGistDeadTuple
spgFormDeadTuple(SpGistState *state, int tupstate,
BlockNumber blkno, OffsetNumber offnum)
{
SpGistDeadTuple tuple = (SpGistDeadTuple) state->deadTupleStorage;
tuple->tupstate = tupstate;
tuple->size = SGDTSIZE;
tuple->nextOffset = InvalidOffsetNumber;
if (tupstate == SPGIST_REDIRECT)
{
ItemPointerSet(&tuple->pointer, blkno, offnum);
tuple->xid = state->myXid;
}
else
{
ItemPointerSetInvalid(&tuple->pointer);
tuple->xid = InvalidTransactionId;
}
return tuple;
}
/*
* Extract the label datums of the nodes within innerTuple
*
* Returns NULL if the label datums are all NULL
*/
Datum *
spgExtractNodeLabels(SpGistState *state, SpGistInnerTuple innerTuple)
{
Datum *nodeLabels;
int nullcount = 0;
int i;
SpGistNodeTuple node;
nodeLabels = (Datum *) palloc(sizeof(Datum) * innerTuple->nNodes);
SGITITERATE(innerTuple, i, node)
{
if (IndexTupleHasNulls(node))
nullcount++;
else
nodeLabels[i] = SGNTDATUM(node, state);
}
if (nullcount == innerTuple->nNodes)
{
/* They're all null, so just return NULL */
pfree(nodeLabels);
return NULL;
}
if (nullcount != 0)
elog(ERROR, "some but not all node labels are null in SPGiST inner tuple");
return nodeLabels;
}
/*
* Add a new item to the page, replacing a PLACEHOLDER item if possible.
* Return the location it's inserted at, or InvalidOffsetNumber on failure.
*
* If startOffset isn't NULL, we start searching for placeholders at
* *startOffset, and update that to the next place to search. This is just
* an optimization for repeated insertions.
*
* If errorOK is false, we throw error when there's not enough room,
* rather than returning InvalidOffsetNumber.
*/
OffsetNumber
SpGistPageAddNewItem(SpGistState *state, Page page, Item item, Size size,
OffsetNumber *startOffset, bool errorOK)
{
SpGistPageOpaque opaque = SpGistPageGetOpaque(page);
OffsetNumber i,
maxoff,
offnum;
if (opaque->nPlaceholder > 0 &&
PageGetExactFreeSpace(page) + SGDTSIZE >= MAXALIGN(size))
{
/* Try to replace a placeholder */
maxoff = PageGetMaxOffsetNumber(page);
offnum = InvalidOffsetNumber;
for (;;)
{
if (startOffset && *startOffset != InvalidOffsetNumber)
i = *startOffset;
else
i = FirstOffsetNumber;
for (; i <= maxoff; i++)
{
SpGistDeadTuple it = (SpGistDeadTuple) PageGetItem(page,
PageGetItemId(page, i));
if (it->tupstate == SPGIST_PLACEHOLDER)
{
offnum = i;
break;
}
}
/* Done if we found a placeholder */
if (offnum != InvalidOffsetNumber)
break;
if (startOffset && *startOffset != InvalidOffsetNumber)
{
/* Hint was no good, re-search from beginning */
*startOffset = InvalidOffsetNumber;
continue;
}
/* Hmm, no placeholder found?  Fix the count and fall through to append */
opaque->nPlaceholder = 0;
break;
}
if (offnum != InvalidOffsetNumber)
{
/* Replace the placeholder tuple */
PageIndexTupleDelete(page, offnum);
offnum = PageAddItem(page, item, size, offnum, false, false);
/*
* We should not have failed given the size check at the top of
* the function, but test anyway. If we did fail, we must PANIC
* because we've already deleted the placeholder tuple, and
* there's no other way to keep the damage from getting to disk.
*/
if (offnum != InvalidOffsetNumber)
{
Assert(opaque->nPlaceholder > 0);
opaque->nPlaceholder--;
if (startOffset)
*startOffset = offnum + 1;
}
else
elog(PANIC, "failed to add item of size %u to SPGiST index page",
size);
return offnum;
}
}
/* No luck in replacing a placeholder, so just add it to the page */
offnum = PageAddItem(page, item, size,
InvalidOffsetNumber, false, false);
if (offnum == InvalidOffsetNumber && !errorOK)
elog(ERROR, "failed to add item of size %u to SPGiST index page",
size);
return offnum;
}

View File

@ -0,0 +1,755 @@
/*-------------------------------------------------------------------------
*
* spgvacuum.c
* vacuum for SP-GiST
*
*
* Portions Copyright (c) 1996-2011, PostgreSQL Global Development Group
* Portions Copyright (c) 1994, Regents of the University of California
*
* IDENTIFICATION
* src/backend/access/spgist/spgvacuum.c
*
*-------------------------------------------------------------------------
*/
#include "postgres.h"
#include "access/genam.h"
#include "access/spgist_private.h"
#include "access/transam.h"
#include "catalog/storage.h"
#include "commands/vacuum.h"
#include "miscadmin.h"
#include "storage/bufmgr.h"
#include "storage/indexfsm.h"
#include "storage/lmgr.h"
#include "storage/procarray.h"
/* local state for vacuum operations */
typedef struct spgBulkDeleteState
{
/* Parameters passed in to spgvacuumscan */
IndexVacuumInfo *info;
IndexBulkDeleteResult *stats;
IndexBulkDeleteCallback callback;
void *callback_state;
/* Additional working state */
SpGistState spgstate;
TransactionId OldestXmin;
BlockNumber lastFilledBlock;
} spgBulkDeleteState;
/*
* Vacuum a regular (non-root) leaf page
*
* We must delete tuples that are targeted for deletion by the VACUUM,
* but not move any tuples that are referenced by outside links; we assume
* those are the ones that are heads of chains.
*/
static void
vacuumLeafPage(spgBulkDeleteState *bds, Relation index, Buffer buffer)
{
Page page = BufferGetPage(buffer);
spgxlogVacuumLeaf xlrec;
XLogRecData rdata[8];
OffsetNumber toDead[MaxIndexTuplesPerPage];
OffsetNumber toPlaceholder[MaxIndexTuplesPerPage];
OffsetNumber moveSrc[MaxIndexTuplesPerPage];
OffsetNumber moveDest[MaxIndexTuplesPerPage];
OffsetNumber chainSrc[MaxIndexTuplesPerPage];
OffsetNumber chainDest[MaxIndexTuplesPerPage];
OffsetNumber predecessor[MaxIndexTuplesPerPage + 1];
bool deletable[MaxIndexTuplesPerPage + 1];
int nDeletable;
OffsetNumber i,
max = PageGetMaxOffsetNumber(page);
memset(predecessor, 0, sizeof(predecessor));
memset(deletable, 0, sizeof(deletable));
nDeletable = 0;
/* Scan page, identify tuples to delete, accumulate stats */
for (i = FirstOffsetNumber; i <= max; i++)
{
SpGistLeafTuple lt;
lt = (SpGistLeafTuple) PageGetItem(page,
PageGetItemId(page, i));
if (lt->tupstate == SPGIST_LIVE)
{
Assert(ItemPointerIsValid(&lt->heapPtr));
if (bds->callback(&lt->heapPtr, bds->callback_state))
{
bds->stats->tuples_removed += 1;
deletable[i] = true;
nDeletable++;
}
else
{
bds->stats->num_index_tuples += 1;
}
/* Form predecessor map, too */
if (lt->nextOffset != InvalidOffsetNumber)
{
/* paranoia about corrupted chain links */
if (lt->nextOffset < FirstOffsetNumber ||
lt->nextOffset > max ||
predecessor[lt->nextOffset] != InvalidOffsetNumber)
elog(ERROR, "inconsistent tuple chain links in page %u of index \"%s\"",
BufferGetBlockNumber(buffer),
RelationGetRelationName(index));
predecessor[lt->nextOffset] = i;
}
}
else
{
Assert(lt->nextOffset == InvalidOffsetNumber);
}
}
if (nDeletable == 0)
return; /* nothing more to do */
/*----------
* Figure out exactly what we have to do. We do this separately from
* actually modifying the page, mainly so that we have a representation
* that can be dumped into WAL and then the replay code can do exactly
* the same thing. The output of this step consists of six arrays
* describing four kinds of operations, to be performed in this order:
*
* toDead[]: tuple numbers to be replaced with DEAD tuples
* toPlaceholder[]: tuple numbers to be replaced with PLACEHOLDER tuples
* moveSrc[]: tuple numbers that need to be relocated to another offset
* (replacing the tuple there) and then replaced with PLACEHOLDER tuples
* moveDest[]: new locations for moveSrc tuples
* chainSrc[]: tuple numbers whose chain links (nextOffset) need updates
* chainDest[]: new values of nextOffset for chainSrc members
*
* It's easiest to figure out what we have to do by processing tuple
* chains, so we iterate over all the tuples (not just the deletable
* ones!) to identify chain heads, then chase down each chain and make
* work item entries for deletable tuples within the chain.
*----------
*/
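/*
* Worked example (illustrative): for a chain 1 -> 2 -> 3 -> 4 in which
* tuples 1 and 3 are deletable, tuple 2 is moved to the head position
* (moveSrc = 2, moveDest = 1) and its old offset becomes a placeholder;
* tuple 3 is replaced by a placeholder (toPlaceholder = 3); and the
* relocated head's chain link is updated to skip the gap (chainSrc = 1,
* chainDest = 4).
*/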
xlrec.nDead = xlrec.nPlaceholder = xlrec.nMove = xlrec.nChain = 0;
for (i = FirstOffsetNumber; i <= max; i++)
{
SpGistLeafTuple head;
bool interveningDeletable;
OffsetNumber prevLive;
OffsetNumber j;
head = (SpGistLeafTuple) PageGetItem(page,
PageGetItemId(page, i));
if (head->tupstate != SPGIST_LIVE)
continue; /* can't be a chain member */
if (predecessor[i] != 0)
continue; /* not a chain head */
/* initialize ... */
interveningDeletable = false;
prevLive = deletable[i] ? InvalidOffsetNumber : i;
/* scan down the chain ... */
j = head->nextOffset;
while (j != InvalidOffsetNumber)
{
SpGistLeafTuple lt;
lt = (SpGistLeafTuple) PageGetItem(page,
PageGetItemId(page, j));
if (lt->tupstate != SPGIST_LIVE)
{
/* all tuples in chain should be live */
elog(ERROR, "unexpected SPGiST tuple state: %d",
lt->tupstate);
}
if (deletable[j])
{
/* This tuple should be replaced by a placeholder */
toPlaceholder[xlrec.nPlaceholder] = j;
xlrec.nPlaceholder++;
/* previous live tuple's chain link will need an update */
interveningDeletable = true;
}
else if (prevLive == InvalidOffsetNumber)
{
/*
* This is the first live tuple in the chain. It has
* to move to the head position.
*/
moveSrc[xlrec.nMove] = j;
moveDest[xlrec.nMove] = i;
xlrec.nMove++;
/* Chain updates will be applied after the move */
prevLive = i;
interveningDeletable = false;
}
else
{
/*
* Second or later live tuple. Arrange to re-chain it to the
* previous live one, if there was a gap.
*/
if (interveningDeletable)
{
chainSrc[xlrec.nChain] = prevLive;
chainDest[xlrec.nChain] = j;
xlrec.nChain++;
}
prevLive = j;
interveningDeletable = false;
}
j = lt->nextOffset;
}
if (prevLive == InvalidOffsetNumber)
{
/* The chain is entirely removable, so we need a DEAD tuple */
toDead[xlrec.nDead] = i;
xlrec.nDead++;
}
else if (interveningDeletable)
{
/* One or more deletions at end of chain, so close it off */
chainSrc[xlrec.nChain] = prevLive;
chainDest[xlrec.nChain] = InvalidOffsetNumber;
xlrec.nChain++;
}
}
/* sanity check ... */
if (nDeletable != xlrec.nDead + xlrec.nPlaceholder + xlrec.nMove)
elog(ERROR, "inconsistent counts of deletable tuples");
/* Prepare WAL record */
xlrec.node = index->rd_node;
xlrec.blkno = BufferGetBlockNumber(buffer);
STORE_STATE(&bds->spgstate, xlrec.stateSrc);
ACCEPT_RDATA_DATA(&xlrec, sizeof(xlrec), 0);
/* sizeof(xlrec) should be a multiple of sizeof(OffsetNumber) */
ACCEPT_RDATA_DATA(toDead, sizeof(OffsetNumber) * xlrec.nDead, 1);
ACCEPT_RDATA_DATA(toPlaceholder, sizeof(OffsetNumber) * xlrec.nPlaceholder, 2);
ACCEPT_RDATA_DATA(moveSrc, sizeof(OffsetNumber) * xlrec.nMove, 3);
ACCEPT_RDATA_DATA(moveDest, sizeof(OffsetNumber) * xlrec.nMove, 4);
ACCEPT_RDATA_DATA(chainSrc, sizeof(OffsetNumber) * xlrec.nChain, 5);
ACCEPT_RDATA_DATA(chainDest, sizeof(OffsetNumber) * xlrec.nChain, 6);
ACCEPT_RDATA_BUFFER(buffer, 7);
/* Do the updates */
START_CRIT_SECTION();
spgPageIndexMultiDelete(&bds->spgstate, page,
toDead, xlrec.nDead,
SPGIST_DEAD, SPGIST_DEAD,
InvalidBlockNumber, InvalidOffsetNumber);
spgPageIndexMultiDelete(&bds->spgstate, page,
toPlaceholder, xlrec.nPlaceholder,
SPGIST_PLACEHOLDER, SPGIST_PLACEHOLDER,
InvalidBlockNumber, InvalidOffsetNumber);
/*
* We implement the move step by swapping the item pointers of the
* source and target tuples, then replacing the newly-source tuples
* with placeholders. This is perhaps unduly friendly with the page
* data representation, but it's fast and doesn't risk page overflow
* when a tuple to be relocated is large.
*/
for (i = 0; i < xlrec.nMove; i++)
{
ItemId idSrc = PageGetItemId(page, moveSrc[i]);
ItemId idDest = PageGetItemId(page, moveDest[i]);
ItemIdData tmp;
tmp = *idSrc;
*idSrc = *idDest;
*idDest = tmp;
}
spgPageIndexMultiDelete(&bds->spgstate, page,
moveSrc, xlrec.nMove,
SPGIST_PLACEHOLDER, SPGIST_PLACEHOLDER,
InvalidBlockNumber, InvalidOffsetNumber);
for (i = 0; i < xlrec.nChain; i++)
{
SpGistLeafTuple lt;
lt = (SpGistLeafTuple) PageGetItem(page,
PageGetItemId(page, chainSrc[i]));
Assert(lt->tupstate == SPGIST_LIVE);
lt->nextOffset = chainDest[i];
}
MarkBufferDirty(buffer);
if (RelationNeedsWAL(index))
{
XLogRecPtr recptr;
recptr = XLogInsert(RM_SPGIST_ID, XLOG_SPGIST_VACUUM_LEAF, rdata);
PageSetLSN(page, recptr);
PageSetTLI(page, ThisTimeLineID);
}
END_CRIT_SECTION();
}
/*
* Vacuum the root page when it is a leaf
*
* On the root, we just delete any dead leaf tuples; no fancy business
*/
static void
vacuumLeafRoot(spgBulkDeleteState *bds, Relation index, Buffer buffer)
{
Page page = BufferGetPage(buffer);
spgxlogVacuumRoot xlrec;
XLogRecData rdata[3];
OffsetNumber toDelete[MaxIndexTuplesPerPage];
OffsetNumber i,
max = PageGetMaxOffsetNumber(page);
xlrec.nDelete = 0;
/* Scan page, identify tuples to delete, accumulate stats */
for (i = FirstOffsetNumber; i <= max; i++)
{
SpGistLeafTuple lt;
lt = (SpGistLeafTuple) PageGetItem(page,
PageGetItemId(page, i));
if (lt->tupstate == SPGIST_LIVE)
{
Assert(ItemPointerIsValid(&lt->heapPtr));
if (bds->callback(&lt->heapPtr, bds->callback_state))
{
bds->stats->tuples_removed += 1;
toDelete[xlrec.nDelete] = i;
xlrec.nDelete++;
}
else
{
bds->stats->num_index_tuples += 1;
}
}
else
{
/* all tuples on root should be live */
elog(ERROR, "unexpected SPGiST tuple state: %d",
lt->tupstate);
}
}
if (xlrec.nDelete == 0)
return; /* nothing more to do */
/* Prepare WAL record */
xlrec.node = index->rd_node;
STORE_STATE(&bds->spgstate, xlrec.stateSrc);
ACCEPT_RDATA_DATA(&xlrec, sizeof(xlrec), 0);
/* sizeof(xlrec) should be a multiple of sizeof(OffsetNumber) */
ACCEPT_RDATA_DATA(toDelete, sizeof(OffsetNumber) * xlrec.nDelete, 1);
ACCEPT_RDATA_BUFFER(buffer, 2);
/* Do the update */
START_CRIT_SECTION();
/* The tuple numbers are in order, so we can use PageIndexMultiDelete */
PageIndexMultiDelete(page, toDelete, xlrec.nDelete);
MarkBufferDirty(buffer);
if (RelationNeedsWAL(index))
{
XLogRecPtr recptr;
recptr = XLogInsert(RM_SPGIST_ID, XLOG_SPGIST_VACUUM_ROOT, rdata);
PageSetLSN(page, recptr);
PageSetTLI(page, ThisTimeLineID);
}
END_CRIT_SECTION();
}
/*
* Clean up redirect and placeholder tuples on the given page
*
* Redirect tuples can be marked placeholder once they're old enough.
* Placeholder tuples can be removed if it won't change the offsets of
* non-placeholder ones.
*
* Unlike the routines above, this works on both leaf and inner pages.
*/
static void
vacuumRedirectAndPlaceholder(Relation index, Buffer buffer,
TransactionId OldestXmin)
{
Page page = BufferGetPage(buffer);
SpGistPageOpaque opaque = SpGistPageGetOpaque(page);
OffsetNumber i,
max = PageGetMaxOffsetNumber(page),
firstPlaceholder = InvalidOffsetNumber;
bool hasNonPlaceholder = false;
bool hasUpdate = false;
OffsetNumber itemToPlaceholder[MaxIndexTuplesPerPage];
OffsetNumber itemnos[MaxIndexTuplesPerPage];
spgxlogVacuumRedirect xlrec;
XLogRecData rdata[3];
xlrec.node = index->rd_node;
xlrec.blkno = BufferGetBlockNumber(buffer);
xlrec.nToPlaceholder = 0;
START_CRIT_SECTION();
/*
* Scan backwards to convert old redirection tuples to placeholder tuples,
* and identify the location of the last non-placeholder tuple while at it.
*/
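/*
* A redirection tuple carries the XID of the transaction that created
* it; once that XID precedes OldestXmin, no still-running scan can need
* to follow the redirect any more, so it can be demoted to a
* placeholder.
*/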
for (i = max;
i >= FirstOffsetNumber &&
(opaque->nRedirection > 0 || !hasNonPlaceholder);
i--)
{
SpGistDeadTuple dt;
dt = (SpGistDeadTuple) PageGetItem(page, PageGetItemId(page, i));
if (dt->tupstate == SPGIST_REDIRECT &&
TransactionIdPrecedes(dt->xid, OldestXmin))
{
dt->tupstate = SPGIST_PLACEHOLDER;
Assert(opaque->nRedirection > 0);
opaque->nRedirection--;
opaque->nPlaceholder++;
ItemPointerSetInvalid(&dt->pointer);
itemToPlaceholder[xlrec.nToPlaceholder] = i;
xlrec.nToPlaceholder++;
hasUpdate = true;
}
if (dt->tupstate == SPGIST_PLACEHOLDER)
{
if (!hasNonPlaceholder)
firstPlaceholder = i;
}
else
{
hasNonPlaceholder = true;
}
}
/*
* Any placeholder tuples at the end of page can safely be removed. We
* can't remove ones before the last non-placeholder, though, because we
* can't alter the offset numbers of non-placeholder tuples.
*/
if (firstPlaceholder != InvalidOffsetNumber)
{
/*
* We do not store this array in the rdata chain because it's easy to recreate.
*/
for (i = firstPlaceholder; i <= max; i++)
itemnos[i - firstPlaceholder] = i;
i = max - firstPlaceholder + 1;
Assert(opaque->nPlaceholder >= i);
opaque->nPlaceholder -= i;
/* The array is surely sorted, so can use PageIndexMultiDelete */
PageIndexMultiDelete(page, itemnos, i);
hasUpdate = true;
}
xlrec.firstPlaceholder = firstPlaceholder;
if (hasUpdate)
MarkBufferDirty(buffer);
if (hasUpdate && RelationNeedsWAL(index))
{
XLogRecPtr recptr;
ACCEPT_RDATA_DATA(&xlrec, sizeof(xlrec), 0);
ACCEPT_RDATA_DATA(itemToPlaceholder, sizeof(OffsetNumber) * xlrec.nToPlaceholder, 1);
ACCEPT_RDATA_BUFFER(buffer, 2);
recptr = XLogInsert(RM_SPGIST_ID, XLOG_SPGIST_VACUUM_REDIRECT, rdata);
PageSetLSN(page, recptr);
PageSetTLI(page, ThisTimeLineID);
}
END_CRIT_SECTION();
}
/*
* Process one page during a bulkdelete scan
*/
static void
spgvacuumpage(spgBulkDeleteState *bds, BlockNumber blkno)
{
Relation index = bds->info->index;
Buffer buffer;
Page page;
/* call vacuum_delay_point while not holding any buffer lock */
vacuum_delay_point();
buffer = ReadBufferExtended(index, MAIN_FORKNUM, blkno,
RBM_NORMAL, bds->info->strategy);
LockBuffer(buffer, BUFFER_LOCK_EXCLUSIVE);
page = (Page) BufferGetPage(buffer);
if (PageIsNew(page))
{
/*
* We found an all-zero page, which could happen if the database
* crashed just after extending the file. Initialize and recycle it.
*/
SpGistInitBuffer(buffer, 0);
SpGistPageSetDeleted(page);
/* We don't bother to WAL-log this action; easy to redo */
MarkBufferDirty(buffer);
}
else if (SpGistPageIsDeleted(page))
{
/* nothing to do */
}
else if (SpGistPageIsLeaf(page))
{
if (blkno == SPGIST_HEAD_BLKNO)
{
vacuumLeafRoot(bds, index, buffer);
/* no need for vacuumRedirectAndPlaceholder */
}
else
{
vacuumLeafPage(bds, index, buffer);
vacuumRedirectAndPlaceholder(index, buffer, bds->OldestXmin);
}
}
else
{
/* inner page */
vacuumRedirectAndPlaceholder(index, buffer, bds->OldestXmin);
}
/*
* The root page must never be deleted, nor marked as available in FSM,
* because we don't want it ever returned by a search for a place to
* put a new tuple. Otherwise, check for empty/deletable page, and
* make sure FSM knows about it.
*/
if (blkno != SPGIST_HEAD_BLKNO)
{
/* If page is now empty, mark it deleted */
if (PageIsEmpty(page) && !SpGistPageIsDeleted(page))
{
SpGistPageSetDeleted(page);
/* We don't bother to WAL-log this action; easy to redo */
MarkBufferDirty(buffer);
}
if (SpGistPageIsDeleted(page))
{
RecordFreeIndexPage(index, blkno);
bds->stats->pages_deleted++;
}
else
bds->lastFilledBlock = blkno;
}
SpGistSetLastUsedPage(index, buffer);
UnlockReleaseBuffer(buffer);
}
/*
* Perform a bulkdelete scan
*/
static void
spgvacuumscan(spgBulkDeleteState *bds)
{
Relation index = bds->info->index;
bool needLock;
BlockNumber num_pages,
blkno;
/* Finish setting up spgBulkDeleteState */
initSpGistState(&bds->spgstate, index);
bds->OldestXmin = GetOldestXmin(true, false);
bds->lastFilledBlock = SPGIST_HEAD_BLKNO;
/*
* Reset counts that will be incremented during the scan; needed in case
* of multiple scans during a single VACUUM command
*/
bds->stats->estimated_count = false;
bds->stats->num_index_tuples = 0;
bds->stats->pages_deleted = 0;
/* We can skip locking for new or temp relations */
needLock = !RELATION_IS_LOCAL(index);
/*
* The outer loop iterates over all index pages except the metapage, in
* physical order (we hope the kernel will cooperate in providing
* read-ahead for speed). It is critical that we visit all leaf pages,
* including ones added after we start the scan, else we might fail to
* delete some deletable tuples. See more extensive comments about
* this in btvacuumscan().
*/
blkno = SPGIST_HEAD_BLKNO;
for (;;)
{
/* Get the current relation length */
if (needLock)
LockRelationForExtension(index, ExclusiveLock);
num_pages = RelationGetNumberOfBlocks(index);
if (needLock)
UnlockRelationForExtension(index, ExclusiveLock);
/* Quit if we've scanned the whole relation */
if (blkno >= num_pages)
break;
/* Iterate over pages, then loop back to recheck length */
for (; blkno < num_pages; blkno++)
{
spgvacuumpage(bds, blkno);
}
}
/* Propagate local lastUsedPage cache to metablock */
SpGistUpdateMetaPage(index);
/*
* Truncate index if possible
*
* XXX disabled because it's unsafe due to possible concurrent inserts.
* We'd have to rescan the pages to make sure they're still empty, and it
* doesn't seem worth it. Note that btree doesn't do this either.
*/
#ifdef NOT_USED
if (num_pages > bds->lastFilledBlock + 1)
{
BlockNumber lastBlock = num_pages - 1;
num_pages = bds->lastFilledBlock + 1;
RelationTruncate(index, num_pages);
bds->stats->pages_removed += lastBlock - bds->lastFilledBlock;
bds->stats->pages_deleted -= lastBlock - bds->lastFilledBlock;
}
#endif
/* Report final stats */
bds->stats->num_pages = num_pages;
bds->stats->pages_free = bds->stats->pages_deleted;
}
/*
* Bulk deletion of all index entries pointing to a set of heap tuples.
* The set of target tuples is specified via a callback routine that tells
* whether any given heap tuple (identified by ItemPointer) is being deleted.
*
* Result: a palloc'd struct containing statistical info for VACUUM displays.
*/
Datum
spgbulkdelete(PG_FUNCTION_ARGS)
{
IndexVacuumInfo *info = (IndexVacuumInfo *) PG_GETARG_POINTER(0);
IndexBulkDeleteResult *stats = (IndexBulkDeleteResult *) PG_GETARG_POINTER(1);
IndexBulkDeleteCallback callback = (IndexBulkDeleteCallback) PG_GETARG_POINTER(2);
void *callback_state = (void *) PG_GETARG_POINTER(3);
spgBulkDeleteState bds;
/* allocate stats if first time through, else re-use existing struct */
if (stats == NULL)
stats = (IndexBulkDeleteResult *) palloc0(sizeof(IndexBulkDeleteResult));
bds.info = info;
bds.stats = stats;
bds.callback = callback;
bds.callback_state = callback_state;
spgvacuumscan(&bds);
PG_RETURN_POINTER(stats);
}
/* Dummy callback to delete no tuples during spgvacuumcleanup */
static bool
dummy_callback(ItemPointer itemptr, void *state)
{
return false;
}
/*
* Post-VACUUM cleanup.
*
* Result: a palloc'd struct containing statistical info for VACUUM displays.
*/
Datum
spgvacuumcleanup(PG_FUNCTION_ARGS)
{
IndexVacuumInfo *info = (IndexVacuumInfo *) PG_GETARG_POINTER(0);
IndexBulkDeleteResult *stats = (IndexBulkDeleteResult *) PG_GETARG_POINTER(1);
Relation index = info->index;
spgBulkDeleteState bds;
/* No-op in ANALYZE ONLY mode */
if (info->analyze_only)
PG_RETURN_POINTER(stats);
/*
* We don't need to scan the index if there was a preceding bulkdelete
* pass. Otherwise, make a pass that won't delete any live tuples, but
* might still accomplish useful stuff with redirect/placeholder cleanup,
* and in any case will provide stats.
*/
if (stats == NULL)
{
stats = (IndexBulkDeleteResult *) palloc0(sizeof(IndexBulkDeleteResult));
bds.info = info;
bds.stats = stats;
bds.callback = dummy_callback;
bds.callback_state = NULL;
spgvacuumscan(&bds);
}
/* Finally, vacuum the FSM */
IndexFreeSpaceMapVacuum(index);
/*
* It's quite possible for us to be fooled by concurrent page splits into
* double-counting some index tuples, so disbelieve any total that exceeds
* the underlying heap's count ... if we know that accurately. Otherwise
* this might just make matters worse.
*/
if (!info->estimated_count)
{
if (stats->num_index_tuples > info->num_heap_tuples)
stats->num_index_tuples = info->num_heap_tuples;
}
PG_RETURN_POINTER(stats);
}

File diff suppressed because it is too large