tableam: Add and use scan APIs.
To allow table accesses to be not directly dependent on heap, several new abstractions are needed. Specifically:

1) Heap scans need to be generalized into table scans. Do this by introducing TableScanDesc, which will be the "base class" for individual AMs. This contains the AM independent fields from HeapScanDesc.

   The previous heap_{beginscan,rescan,endscan} et al. have been replaced with a table_ version.

   There's no direct replacement for heap_getnext(), as that returned a HeapTuple, which is undesirable for other AMs. Instead there's table_scan_getnextslot(). But note that heap_getnext() lives on; it's still used widely to access catalog tables.

   This is achieved by new scan_begin, scan_end, scan_rescan, scan_getnextslot callbacks.

2) The portion of parallel scans that's shared between backends needs to work without the user doing per-AM work. To achieve that, new parallelscan_{estimate, initialize, reinitialize} callbacks are introduced, which operate on a new ParallelTableScanDesc, which again can be subclassed by AMs.

   As it is likely that several AMs are going to be block oriented, block oriented callbacks that can be shared between such AMs are provided and used by heap: table_block_parallelscan_{estimate, initialize, reinitialize} as callbacks, and table_block_parallelscan_{nextpage, init} for use in AMs. These operate on a ParallelBlockTableScanDesc.

3) Index scans need to be able to access tables to return a tuple, and there needs to be state across individual accesses to the heap to store state like buffers. That's now handled by introducing a sort-of-scan IndexFetchTable, which again is intended to be subclassed by individual AMs (for heap, IndexFetchHeap).

   The relevant callbacks for an AM are index_fetch_{begin, reset, end} to manage the necessary state, and index_fetch_tuple to retrieve an indexed tuple. Note that index_fetch_tuple implementations need to be smarter than just blindly fetching tuples for AMs that have optimizations similar to heap's HOT - the currently alive tuple in the update chain needs to be fetched if appropriate.

   Similar to table_scan_getnextslot(), it's undesirable to continue to return HeapTuples. Thus index_fetch_heap (might want to rename that later) now accepts a slot as an argument. Core code doesn't have a lot of call sites performing index scans without going through the systable_* API (in contrast to loads of heap_getnext calls and working directly with HeapTuples).

   Index scans now store the result of a search in IndexScanDesc->xs_heaptid, rather than xs_ctup->t_self. As the target is not generally a HeapTuple anymore, that seems cleaner.

To be able to sensibly adapt code to use the above, two further callbacks have been introduced:

a) slot_callbacks returns a TupleTableSlotOps* suitable for creating slots capable of holding a tuple of the AM's type. table_slot_callbacks() and table_slot_create() are based upon that, but have additional logic to deal with views, foreign tables, etc.

   While this change could have been done separately, nearly all the call sites that needed to be adapted for the rest of this commit would also have needed to be adapted for table_slot_callbacks(), making separation not worthwhile.

b) tuple_satisfies_snapshot checks whether the tuple in a slot is currently visible according to a snapshot. That's required as a few places now don't have a buffer + HeapTuple around, but a slot (which in heap's case internally has that information).
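As an illustration of how call sites end up looking, here is a minimal sketch of a sequential scan against the new API (not taken from the patch itself); "rel" and "snap" are hypothetical, standing for an already-opened relation and a registered snapshot, and only functions this commit introduces or keeps are used:

    /* AM-independent scan; the slot's concrete type is chosen by the table AM. */
    TableScanDesc scan = table_beginscan(rel, snap, 0, NULL);
    TupleTableSlot *slot = table_slot_create(rel, NULL);

    while (table_scan_getnextslot(scan, ForwardScanDirection, slot))
    {
        /* process the current tuple through slot accessors */
        slot_getallattrs(slot);
    }

    ExecDropSingleTupleTableSlot(slot);
    table_endscan(scan);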
Additionally a few infrastructure changes were needed:

I) SysScanDesc, as used by systable_{beginscan, getnext} et al., now internally uses a slot to keep track of tuples. While systable_getnext() still returns HeapTuples, and will do so for the foreseeable future, the index API (see 1) above) now only deals with slots.

The remainder, and largest part, of this commit is then adjusting all scans in postgres to use the new APIs.

Author: Andres Freund, Haribabu Kommi, Alvaro Herrera
Discussion: https://postgr.es/m/20180703070645.wchpu5muyto5n647@alap3.anarazel.de
    https://postgr.es/m/20160812231527.GA690404@alvherre.pgsql
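The index side follows the same slot-based shape, as the hunks below show. A rough sketch, again with hypothetical setup ("heap", "index", "skey"/"nkeys", a dirty snapshot, and a slot created via table_slot_create()) rather than code from the patch:

    IndexScanDesc scan = index_beginscan(heap, index, &DirtySnapshot, nkeys, 0);

    index_rescan(scan, skey, nkeys, NULL, 0);
    while (index_getnext_slot(scan, ForwardScanDirection, slot))
    {
        /* slot now holds the appropriate visible tuple for this index entry */
    }
    index_endscan(scan);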
@@ -204,7 +204,7 @@ execCurrentOf(CurrentOfExpr *cexpr,
 */
 IndexScanDesc scan = ((IndexOnlyScanState *) scanstate)->ioss_ScanDesc;
 
-*current_tid = scan->xs_ctup.t_self;
+*current_tid = scan->xs_heaptid;
 }
 else
 {
@@ -108,6 +108,7 @@
 
 #include "access/genam.h"
 #include "access/relscan.h"
+#include "access/tableam.h"
 #include "access/xact.h"
 #include "catalog/index.h"
 #include "executor/executor.h"
@@ -651,7 +652,6 @@ check_exclusion_or_unique_constraint(Relation heap, Relation index,
 Oid *index_collations = index->rd_indcollation;
 int indnkeyatts = IndexRelationGetNumberOfKeyAttributes(index);
 IndexScanDesc index_scan;
-HeapTuple tup;
 ScanKeyData scankeys[INDEX_MAX_KEYS];
 SnapshotData DirtySnapshot;
 int i;
@@ -707,8 +707,7 @@ check_exclusion_or_unique_constraint(Relation heap, Relation index,
 * to this slot. Be sure to save and restore caller's value for
 * scantuple.
 */
-existing_slot = MakeSingleTupleTableSlot(RelationGetDescr(heap),
-                                         &TTSOpsHeapTuple);
+existing_slot = table_slot_create(heap, NULL);
 
 econtext = GetPerTupleExprContext(estate);
 save_scantuple = econtext->ecxt_scantuple;
@@ -724,11 +723,9 @@ retry:
 index_scan = index_beginscan(heap, index, &DirtySnapshot, indnkeyatts, 0);
 index_rescan(index_scan, scankeys, indnkeyatts, NULL, 0);
 
-while ((tup = index_getnext(index_scan,
-                            ForwardScanDirection)) != NULL)
+while (index_getnext_slot(index_scan, ForwardScanDirection, existing_slot))
 {
 TransactionId xwait;
-ItemPointerData ctid_wait;
 XLTW_Oper reason_wait;
 Datum existing_values[INDEX_MAX_KEYS];
 bool existing_isnull[INDEX_MAX_KEYS];
@@ -739,7 +736,7 @@ retry:
 * Ignore the entry for the tuple we're trying to check.
 */
 if (ItemPointerIsValid(tupleid) &&
-    ItemPointerEquals(tupleid, &tup->t_self))
+    ItemPointerEquals(tupleid, &existing_slot->tts_tid))
 {
 if (found_self) /* should not happen */
 elog(ERROR, "found self tuple multiple times in index \"%s\"",
@@ -752,7 +749,6 @@ retry:
 * Extract the index column values and isnull flags from the existing
 * tuple.
 */
-ExecStoreHeapTuple(tup, existing_slot, false);
 FormIndexDatum(indexInfo, existing_slot, estate,
                existing_values, existing_isnull);
 
@@ -787,7 +783,6 @@ retry:
 DirtySnapshot.speculativeToken &&
 TransactionIdPrecedes(GetCurrentTransactionId(), xwait))))
 {
-ctid_wait = tup->t_data->t_ctid;
 reason_wait = indexInfo->ii_ExclusionOps ?
 XLTW_RecheckExclusionConstr : XLTW_InsertIndex;
 index_endscan(index_scan);
@@ -795,7 +790,8 @@ retry:
 SpeculativeInsertionWait(DirtySnapshot.xmin,
                          DirtySnapshot.speculativeToken);
 else
-XactLockTableWait(xwait, heap, &ctid_wait, reason_wait);
+XactLockTableWait(xwait, heap,
+                  &existing_slot->tts_tid, reason_wait);
 goto retry;
 }
 
@@ -807,7 +803,7 @@ retry:
 {
 conflict = true;
 if (conflictTid)
-*conflictTid = tup->t_self;
+*conflictTid = existing_slot->tts_tid;
 break;
 }
 
@@ -40,6 +40,7 @@
 #include "access/heapam.h"
 #include "access/htup_details.h"
 #include "access/sysattr.h"
+#include "access/tableam.h"
 #include "access/transam.h"
 #include "access/xact.h"
 #include "catalog/namespace.h"
@@ -2802,9 +2803,8 @@ EvalPlanQualSlot(EPQState *epqstate,
 oldcontext = MemoryContextSwitchTo(epqstate->estate->es_query_cxt);
 
 if (relation)
-*slot = ExecAllocTableSlot(&epqstate->estate->es_tupleTable,
-                           RelationGetDescr(relation),
-                           &TTSOpsBufferHeapTuple);
+*slot = table_slot_create(relation,
+                          &epqstate->estate->es_tupleTable);
 else
 *slot = ExecAllocTableSlot(&epqstate->estate->es_tupleTable,
                            epqstate->origslot->tts_tupleDescriptor,
@@ -14,6 +14,7 @@
 #include "postgres.h"
 
 #include "access/table.h"
+#include "access/tableam.h"
 #include "catalog/partition.h"
 #include "catalog/pg_inherits.h"
 #include "catalog/pg_type.h"
@@ -727,10 +728,8 @@ ExecInitPartitionInfo(ModifyTableState *mtstate, EState *estate,
 if (node->onConflictAction == ONCONFLICT_UPDATE)
 {
 TupleConversionMap *map;
-TupleDesc leaf_desc;
 
 map = leaf_part_rri->ri_PartitionInfo->pi_RootToPartitionMap;
-leaf_desc = RelationGetDescr(leaf_part_rri->ri_RelationDesc);
 
 Assert(node->onConflictSet != NIL);
 Assert(rootResultRelInfo->ri_onConflict != NULL);
@@ -743,9 +742,8 @@ ExecInitPartitionInfo(ModifyTableState *mtstate, EState *estate,
 * descriptors match.
 */
 leaf_part_rri->ri_onConflict->oc_Existing =
-ExecInitExtraTupleSlot(mtstate->ps.state,
-                       leaf_desc,
-                       &TTSOpsBufferHeapTuple);
+table_slot_create(leaf_part_rri->ri_RelationDesc,
+                  &mtstate->ps.state->es_tupleTable);
 
 /*
 * If the partition's tuple descriptor matches exactly the root
@@ -920,8 +918,7 @@ ExecInitRoutingInfo(ModifyTableState *mtstate,
 * end of the command.
 */
 partrouteinfo->pi_PartitionTupleSlot =
-ExecInitExtraTupleSlot(estate, RelationGetDescr(partrel),
-                       &TTSOpsHeapTuple);
+table_slot_create(partrel, &estate->es_tupleTable);
 }
 else
 partrouteinfo->pi_PartitionTupleSlot = NULL;
@@ -17,6 +17,7 @@
 #include "access/genam.h"
 #include "access/heapam.h"
 #include "access/relscan.h"
+#include "access/tableam.h"
 #include "access/transam.h"
 #include "access/xact.h"
 #include "commands/trigger.h"
@@ -118,7 +119,6 @@ RelationFindReplTupleByIndex(Relation rel, Oid idxoid,
 TupleTableSlot *searchslot,
 TupleTableSlot *outslot)
 {
-HeapTuple scantuple;
 ScanKeyData skey[INDEX_MAX_KEYS];
 IndexScanDesc scan;
 SnapshotData snap;
@@ -144,10 +144,9 @@ retry:
 index_rescan(scan, skey, IndexRelationGetNumberOfKeyAttributes(idxrel), NULL, 0);
 
 /* Try to find the tuple */
-if ((scantuple = index_getnext(scan, ForwardScanDirection)) != NULL)
+if (index_getnext_slot(scan, ForwardScanDirection, outslot))
 {
 found = true;
-ExecStoreHeapTuple(scantuple, outslot, false);
 ExecMaterializeSlot(outslot);
 
 xwait = TransactionIdIsValid(snap.xmin) ?
@@ -222,19 +221,21 @@ retry:
 }
 
 /*
- * Compare the tuple and slot and check if they have equal values.
+ * Compare the tuples in the slots by checking if they have equal values.
 */
 static bool
-tuple_equals_slot(TupleDesc desc, HeapTuple tup, TupleTableSlot *slot)
+tuples_equal(TupleTableSlot *slot1, TupleTableSlot *slot2)
 {
-Datum values[MaxTupleAttributeNumber];
-bool isnull[MaxTupleAttributeNumber];
 int attrnum;
 
-heap_deform_tuple(tup, desc, values, isnull);
+Assert(slot1->tts_tupleDescriptor->natts ==
+       slot2->tts_tupleDescriptor->natts);
+
+slot_getallattrs(slot1);
+slot_getallattrs(slot2);
 
 /* Check equality of the attributes. */
-for (attrnum = 0; attrnum < desc->natts; attrnum++)
+for (attrnum = 0; attrnum < slot1->tts_tupleDescriptor->natts; attrnum++)
 {
 Form_pg_attribute att;
 TypeCacheEntry *typentry;
@@ -243,16 +244,16 @@ tuple_equals_slot(TupleDesc desc, HeapTuple tup, TupleTableSlot *slot)
 * If one value is NULL and other is not, then they are certainly not
 * equal
 */
-if (isnull[attrnum] != slot->tts_isnull[attrnum])
+if (slot1->tts_isnull[attrnum] != slot2->tts_isnull[attrnum])
 return false;
 
 /*
 * If both are NULL, they can be considered equal.
 */
-if (isnull[attrnum])
+if (slot1->tts_isnull[attrnum] || slot2->tts_isnull[attrnum])
 continue;
 
-att = TupleDescAttr(desc, attrnum);
+att = TupleDescAttr(slot1->tts_tupleDescriptor, attrnum);
 
 typentry = lookup_type_cache(att->atttypid, TYPECACHE_EQ_OPR_FINFO);
 if (!OidIsValid(typentry->eq_opr_finfo.fn_oid))
@@ -262,8 +263,8 @@ tuple_equals_slot(TupleDesc desc, HeapTuple tup, TupleTableSlot *slot)
 format_type_be(att->atttypid))));
 
 if (!DatumGetBool(FunctionCall2(&typentry->eq_opr_finfo,
-                                values[attrnum],
-                                slot->tts_values[attrnum])))
+                                slot1->tts_values[attrnum],
+                                slot2->tts_values[attrnum])))
 return false;
 }
 
@@ -284,33 +285,33 @@ bool
 RelationFindReplTupleSeq(Relation rel, LockTupleMode lockmode,
                          TupleTableSlot *searchslot, TupleTableSlot *outslot)
 {
-HeapTuple scantuple;
-HeapScanDesc scan;
+TupleTableSlot *scanslot;
+TableScanDesc scan;
 SnapshotData snap;
 TransactionId xwait;
 bool found;
-TupleDesc desc = RelationGetDescr(rel);
+TupleDesc desc PG_USED_FOR_ASSERTS_ONLY = RelationGetDescr(rel);
 
 Assert(equalTupleDescs(desc, outslot->tts_tupleDescriptor));
 
 /* Start a heap scan. */
 InitDirtySnapshot(snap);
-scan = heap_beginscan(rel, &snap, 0, NULL);
+scan = table_beginscan(rel, &snap, 0, NULL);
+scanslot = table_slot_create(rel, NULL);
 
 retry:
 found = false;
 
-heap_rescan(scan, NULL);
+table_rescan(scan, NULL);
 
 /* Try to find the tuple */
-while ((scantuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
+while (table_scan_getnextslot(scan, ForwardScanDirection, scanslot))
 {
-if (!tuple_equals_slot(desc, scantuple, searchslot))
+if (!tuples_equal(scanslot, searchslot))
 continue;
 
 found = true;
-ExecStoreHeapTuple(scantuple, outslot, false);
-ExecMaterializeSlot(outslot);
+ExecCopySlot(outslot, scanslot);
 
 xwait = TransactionIdIsValid(snap.xmin) ?
         snap.xmin : snap.xmax;
@@ -375,7 +376,8 @@ retry:
 }
 }
 
-heap_endscan(scan);
+table_endscan(scan);
+ExecDropSingleTupleTableSlot(scanslot);
 
 return found;
 }
@@ -458,11 +460,9 @@ ExecSimpleRelationUpdate(EState *estate, EPQState *epqstate,
 ResultRelInfo *resultRelInfo = estate->es_result_relation_info;
 Relation rel = resultRelInfo->ri_RelationDesc;
 HeapTupleTableSlot *hsearchslot = (HeapTupleTableSlot *)searchslot;
-HeapTupleTableSlot *hslot = (HeapTupleTableSlot *)slot;
 
-/* We expect both searchslot and the slot to contain a heap tuple. */
+/* We expect the searchslot to contain a heap tuple. */
 Assert(TTS_IS_HEAPTUPLE(searchslot) || TTS_IS_BUFFERTUPLE(searchslot));
-Assert(TTS_IS_HEAPTUPLE(slot) || TTS_IS_BUFFERTUPLE(slot));
 
 /* For now we support only tables. */
 Assert(rel->rd_rel->relkind == RELKIND_RELATION);
@@ -493,11 +493,11 @@ ExecSimpleRelationUpdate(EState *estate, EPQState *epqstate,
 tuple = ExecFetchSlotHeapTuple(slot, true, NULL);
 
 /* OK, update the tuple and index entries for it */
-simple_heap_update(rel, &hsearchslot->tuple->t_self, hslot->tuple);
-ItemPointerCopy(&hslot->tuple->t_self, &slot->tts_tid);
+simple_heap_update(rel, &hsearchslot->tuple->t_self, tuple);
+ItemPointerCopy(&tuple->t_self, &slot->tts_tid);
 
 if (resultRelInfo->ri_NumIndices > 0 &&
-    !HeapTupleIsHeapOnly(hslot->tuple))
+    !HeapTupleIsHeapOnly(tuple))
 recheckIndexes = ExecInsertIndexTuples(slot, &(tuple->t_self),
                                        estate, false, NULL,
                                        NIL);
@@ -48,6 +48,7 @@
 #include "access/parallel.h"
 #include "access/relscan.h"
 #include "access/table.h"
+#include "access/tableam.h"
 #include "access/transam.h"
 #include "executor/executor.h"
 #include "jit/jit.h"
@@ -1121,7 +1122,7 @@ ExecGetTriggerOldSlot(EState *estate, ResultRelInfo *relInfo)
 relInfo->ri_TrigOldSlot =
 ExecInitExtraTupleSlot(estate,
                        RelationGetDescr(rel),
-                       &TTSOpsBufferHeapTuple);
+                       table_slot_callbacks(rel));
 
 MemoryContextSwitchTo(oldcontext);
 }
@@ -1143,7 +1144,7 @@ ExecGetTriggerNewSlot(EState *estate, ResultRelInfo *relInfo)
 relInfo->ri_TrigNewSlot =
 ExecInitExtraTupleSlot(estate,
                        RelationGetDescr(rel),
-                       &TTSOpsBufferHeapTuple);
+                       table_slot_callbacks(rel));
 
 MemoryContextSwitchTo(oldcontext);
 }
@@ -1165,7 +1166,7 @@ ExecGetReturningSlot(EState *estate, ResultRelInfo *relInfo)
 relInfo->ri_ReturningSlot =
 ExecInitExtraTupleSlot(estate,
                        RelationGetDescr(rel),
-                       &TTSOpsBufferHeapTuple);
+                       table_slot_callbacks(rel));
 
 MemoryContextSwitchTo(oldcontext);
 }
@@ -39,6 +39,7 @@
 
 #include "access/heapam.h"
 #include "access/relscan.h"
+#include "access/tableam.h"
 #include "access/transam.h"
 #include "access/visibilitymap.h"
 #include "executor/execdebug.h"
@@ -61,7 +62,7 @@ static inline void BitmapAdjustPrefetchIterator(BitmapHeapScanState *node,
 TBMIterateResult *tbmres);
 static inline void BitmapAdjustPrefetchTarget(BitmapHeapScanState *node);
 static inline void BitmapPrefetch(BitmapHeapScanState *node,
-                                  HeapScanDesc scan);
+                                  TableScanDesc scan);
 static bool BitmapShouldInitializeSharedState(
 ParallelBitmapHeapState *pstate);
 
@@ -76,7 +77,8 @@ static TupleTableSlot *
 BitmapHeapNext(BitmapHeapScanState *node)
 {
 ExprContext *econtext;
-HeapScanDesc scan;
+TableScanDesc scan;
+HeapScanDesc hscan;
 TIDBitmap *tbm;
 TBMIterator *tbmiterator = NULL;
 TBMSharedIterator *shared_tbmiterator = NULL;
@@ -92,6 +94,7 @@ BitmapHeapNext(BitmapHeapScanState *node)
 econtext = node->ss.ps.ps_ExprContext;
 slot = node->ss.ss_ScanTupleSlot;
 scan = node->ss.ss_currentScanDesc;
+hscan = (HeapScanDesc) scan;
 tbm = node->tbm;
 if (pstate == NULL)
 tbmiterator = node->tbmiterator;
@@ -219,7 +222,7 @@ BitmapHeapNext(BitmapHeapScanState *node)
 * least AccessShareLock on the table before performing any of the
 * indexscans, but let's be safe.)
 */
-if (tbmres->blockno >= scan->rs_nblocks)
+if (tbmres->blockno >= hscan->rs_nblocks)
 {
 node->tbmres = tbmres = NULL;
 continue;
@@ -242,14 +245,14 @@ BitmapHeapNext(BitmapHeapScanState *node)
 * The number of tuples on this page is put into
 * scan->rs_ntuples; note we don't fill scan->rs_vistuples.
 */
-scan->rs_ntuples = tbmres->ntuples;
+hscan->rs_ntuples = tbmres->ntuples;
 }
 else
 {
 /*
 * Fetch the current heap page and identify candidate tuples.
 */
-bitgetpage(scan, tbmres);
+bitgetpage(hscan, tbmres);
 }
 
 if (tbmres->ntuples >= 0)
@@ -260,7 +263,7 @@ BitmapHeapNext(BitmapHeapScanState *node)
 /*
 * Set rs_cindex to first slot to examine
 */
-scan->rs_cindex = 0;
+hscan->rs_cindex = 0;
 
 /* Adjust the prefetch target */
 BitmapAdjustPrefetchTarget(node);
@@ -270,7 +273,7 @@ BitmapHeapNext(BitmapHeapScanState *node)
 /*
 * Continuing in previously obtained page; advance rs_cindex
 */
-scan->rs_cindex++;
+hscan->rs_cindex++;
 
 #ifdef USE_PREFETCH
 
@@ -297,7 +300,7 @@ BitmapHeapNext(BitmapHeapScanState *node)
 /*
 * Out of range? If so, nothing more to look at on this page
 */
-if (scan->rs_cindex < 0 || scan->rs_cindex >= scan->rs_ntuples)
+if (hscan->rs_cindex < 0 || hscan->rs_cindex >= hscan->rs_ntuples)
 {
 node->tbmres = tbmres = NULL;
 continue;
@@ -324,15 +327,15 @@ BitmapHeapNext(BitmapHeapScanState *node)
 /*
 * Okay to fetch the tuple.
 */
-targoffset = scan->rs_vistuples[scan->rs_cindex];
-dp = (Page) BufferGetPage(scan->rs_cbuf);
+targoffset = hscan->rs_vistuples[hscan->rs_cindex];
+dp = (Page) BufferGetPage(hscan->rs_cbuf);
 lp = PageGetItemId(dp, targoffset);
 Assert(ItemIdIsNormal(lp));
 
-scan->rs_ctup.t_data = (HeapTupleHeader) PageGetItem((Page) dp, lp);
-scan->rs_ctup.t_len = ItemIdGetLength(lp);
-scan->rs_ctup.t_tableOid = scan->rs_rd->rd_id;
-ItemPointerSet(&scan->rs_ctup.t_self, tbmres->blockno, targoffset);
+hscan->rs_ctup.t_data = (HeapTupleHeader) PageGetItem((Page) dp, lp);
+hscan->rs_ctup.t_len = ItemIdGetLength(lp);
+hscan->rs_ctup.t_tableOid = scan->rs_rd->rd_id;
+ItemPointerSet(&hscan->rs_ctup.t_self, tbmres->blockno, targoffset);
 
 pgstat_count_heap_fetch(scan->rs_rd);
 
@@ -340,9 +343,9 @@ BitmapHeapNext(BitmapHeapScanState *node)
 * Set up the result slot to point to this tuple. Note that the
 * slot acquires a pin on the buffer.
 */
-ExecStoreBufferHeapTuple(&scan->rs_ctup,
+ExecStoreBufferHeapTuple(&hscan->rs_ctup,
                          slot,
-                         scan->rs_cbuf);
+                         hscan->rs_cbuf);
 
 /*
 * If we are using lossy info, we have to recheck the qual
@@ -392,17 +395,17 @@ bitgetpage(HeapScanDesc scan, TBMIterateResult *tbmres)
 Assert(page < scan->rs_nblocks);
 
 scan->rs_cbuf = ReleaseAndReadBuffer(scan->rs_cbuf,
-                                     scan->rs_rd,
+                                     scan->rs_base.rs_rd,
                                      page);
 buffer = scan->rs_cbuf;
-snapshot = scan->rs_snapshot;
+snapshot = scan->rs_base.rs_snapshot;
 
 ntup = 0;
 
 /*
 * Prune and repair fragmentation for the whole page, if possible.
 */
-heap_page_prune_opt(scan->rs_rd, buffer);
+heap_page_prune_opt(scan->rs_base.rs_rd, buffer);
 
 /*
 * We must hold share lock on the buffer content while examining tuple
@@ -430,8 +433,8 @@ bitgetpage(HeapScanDesc scan, TBMIterateResult *tbmres)
 HeapTupleData heapTuple;
 
 ItemPointerSet(&tid, page, offnum);
-if (heap_hot_search_buffer(&tid, scan->rs_rd, buffer, snapshot,
-                           &heapTuple, NULL, true))
+if (heap_hot_search_buffer(&tid, scan->rs_base.rs_rd, buffer,
+                           snapshot, &heapTuple, NULL, true))
 scan->rs_vistuples[ntup++] = ItemPointerGetOffsetNumber(&tid);
 }
 }
@@ -456,16 +459,16 @@ bitgetpage(HeapScanDesc scan, TBMIterateResult *tbmres)
 continue;
 loctup.t_data = (HeapTupleHeader) PageGetItem((Page) dp, lp);
 loctup.t_len = ItemIdGetLength(lp);
-loctup.t_tableOid = scan->rs_rd->rd_id;
+loctup.t_tableOid = scan->rs_base.rs_rd->rd_id;
 ItemPointerSet(&loctup.t_self, page, offnum);
 valid = HeapTupleSatisfiesVisibility(&loctup, snapshot, buffer);
 if (valid)
 {
 scan->rs_vistuples[ntup++] = offnum;
-PredicateLockTuple(scan->rs_rd, &loctup, snapshot);
+PredicateLockTuple(scan->rs_base.rs_rd, &loctup, snapshot);
 }
-CheckForSerializableConflictOut(valid, scan->rs_rd, &loctup,
-                                buffer, snapshot);
+CheckForSerializableConflictOut(valid, scan->rs_base.rs_rd,
                                &loctup, buffer, snapshot);
 }
 }
 
@@ -598,7 +601,7 @@ BitmapAdjustPrefetchTarget(BitmapHeapScanState *node)
 * BitmapPrefetch - Prefetch, if prefetch_pages are behind prefetch_target
 */
 static inline void
-BitmapPrefetch(BitmapHeapScanState *node, HeapScanDesc scan)
+BitmapPrefetch(BitmapHeapScanState *node, TableScanDesc scan)
 {
 #ifdef USE_PREFETCH
 ParallelBitmapHeapState *pstate = node->pstate;
@@ -741,7 +744,7 @@ ExecReScanBitmapHeapScan(BitmapHeapScanState *node)
 PlanState *outerPlan = outerPlanState(node);
 
 /* rescan to release any page pin */
-heap_rescan(node->ss.ss_currentScanDesc, NULL);
+table_rescan(node->ss.ss_currentScanDesc, NULL);
 
 /* release bitmaps and buffers if any */
 if (node->tbmiterator)
@@ -785,7 +788,7 @@ ExecReScanBitmapHeapScan(BitmapHeapScanState *node)
 void
 ExecEndBitmapHeapScan(BitmapHeapScanState *node)
 {
-HeapScanDesc scanDesc;
+TableScanDesc scanDesc;
 
 /*
 * extract information from the node
@@ -830,7 +833,7 @@ ExecEndBitmapHeapScan(BitmapHeapScanState *node)
 /*
 * close heap scan
 */
-heap_endscan(scanDesc);
+table_endscan(scanDesc);
 }
 
 /* ----------------------------------------------------------------
@@ -914,8 +917,7 @@ ExecInitBitmapHeapScan(BitmapHeapScan *node, EState *estate, int eflags)
 */
 ExecInitScanTupleSlot(estate, &scanstate->ss,
                       RelationGetDescr(currentRelation),
-                      &TTSOpsBufferHeapTuple);
-
+                      table_slot_callbacks(currentRelation));
 
 /*
 * Initialize result type and projection.
@@ -953,10 +955,10 @@ ExecInitBitmapHeapScan(BitmapHeapScan *node, EState *estate, int eflags)
 * Even though we aren't going to do a conventional seqscan, it is useful
 * to create a HeapScanDesc --- most of the fields in it are usable.
 */
-scanstate->ss.ss_currentScanDesc = heap_beginscan_bm(currentRelation,
-                                                     estate->es_snapshot,
-                                                     0,
-                                                     NULL);
+scanstate->ss.ss_currentScanDesc = table_beginscan_bm(currentRelation,
+                                                      estate->es_snapshot,
+                                                      0,
+                                                      NULL);
 
 /*
 * all done.
@@ -1104,5 +1106,5 @@ ExecBitmapHeapInitializeWorker(BitmapHeapScanState *node,
 node->pstate = pstate;
 
 snapshot = RestoreSnapshot(pstate->phs_snapshot_data);
-heap_update_snapshot(node->ss.ss_currentScanDesc, snapshot);
+table_scan_update_snapshot(node->ss.ss_currentScanDesc, snapshot);
 }
@@ -32,6 +32,7 @@
 
 #include "access/genam.h"
 #include "access/relscan.h"
+#include "access/tableam.h"
 #include "access/tupdesc.h"
 #include "access/visibilitymap.h"
 #include "executor/execdebug.h"
@@ -119,7 +120,7 @@ IndexOnlyNext(IndexOnlyScanState *node)
 */
 while ((tid = index_getnext_tid(scandesc, direction)) != NULL)
 {
-HeapTuple tuple = NULL;
+bool tuple_from_heap = false;
 
 CHECK_FOR_INTERRUPTS();
 
@@ -165,17 +166,18 @@ IndexOnlyNext(IndexOnlyScanState *node)
 * Rats, we have to visit the heap to check visibility.
 */
 InstrCountTuples2(node, 1);
-tuple = index_fetch_heap(scandesc);
-if (tuple == NULL)
+if (!index_fetch_heap(scandesc, slot))
 continue; /* no visible tuple, try next index entry */
 
+ExecClearTuple(slot);
+
 /*
 * Only MVCC snapshots are supported here, so there should be no
 * need to keep following the HOT chain once a visible entry has
 * been found. If we did want to allow that, we'd need to keep
 * more state to remember not to call index_getnext_tid next time.
 */
-if (scandesc->xs_continue_hot)
+if (scandesc->xs_heap_continue)
 elog(ERROR, "non-MVCC snapshots are not supported in index-only scans");
 
 /*
@@ -184,13 +186,15 @@ IndexOnlyNext(IndexOnlyScanState *node)
 * but it's not clear whether it's a win to do so. The next index
 * entry might require a visit to the same heap page.
 */
+
+tuple_from_heap = true;
 }
 
 /*
 * Fill the scan tuple slot with data from the index. This might be
- * provided in either HeapTuple or IndexTuple format. Conceivably an
- * index AM might fill both fields, in which case we prefer the heap
- * format, since it's probably a bit cheaper to fill a slot from.
+ * provided in either HeapTuple or IndexTuple format. Conceivably
+ * an index AM might fill both fields, in which case we prefer the
+ * heap format, since it's probably a bit cheaper to fill a slot from.
 */
 if (scandesc->xs_hitup)
 {
@@ -201,7 +205,7 @@ IndexOnlyNext(IndexOnlyScanState *node)
 */
 Assert(slot->tts_tupleDescriptor->natts ==
        scandesc->xs_hitupdesc->natts);
-ExecStoreHeapTuple(scandesc->xs_hitup, slot, false);
+ExecForceStoreHeapTuple(scandesc->xs_hitup, slot);
 }
 else if (scandesc->xs_itup)
 StoreIndexTuple(slot, scandesc->xs_itup, scandesc->xs_itupdesc);
@@ -244,7 +248,7 @@ IndexOnlyNext(IndexOnlyScanState *node)
 * anyway, then we already have the tuple-level lock and can skip the
 * page lock.
 */
-if (tuple == NULL)
+if (!tuple_from_heap)
 PredicateLockPage(scandesc->heapRelation,
                   ItemPointerGetBlockNumber(tid),
                   estate->es_snapshot);
@@ -523,7 +527,8 @@ ExecInitIndexOnlyScan(IndexOnlyScan *node, EState *estate, int eflags)
 * suitable data anyway.)
 */
 tupDesc = ExecTypeFromTL(node->indextlist);
-ExecInitScanTupleSlot(estate, &indexstate->ss, tupDesc, &TTSOpsHeapTuple);
+ExecInitScanTupleSlot(estate, &indexstate->ss, tupDesc,
+                      table_slot_callbacks(currentRelation));
 
 /*
 * Initialize result type and projection info. The node's targetlist will
@@ -31,6 +31,7 @@
 
 #include "access/nbtree.h"
 #include "access/relscan.h"
+#include "access/tableam.h"
 #include "catalog/pg_am.h"
 #include "executor/execdebug.h"
 #include "executor/nodeIndexscan.h"
@@ -64,7 +65,7 @@ static int cmp_orderbyvals(const Datum *adist, const bool *anulls,
 IndexScanState *node);
 static int reorderqueue_cmp(const pairingheap_node *a,
                             const pairingheap_node *b, void *arg);
-static void reorderqueue_push(IndexScanState *node, HeapTuple tuple,
+static void reorderqueue_push(IndexScanState *node, TupleTableSlot *slot,
                               Datum *orderbyvals, bool *orderbynulls);
 static HeapTuple reorderqueue_pop(IndexScanState *node);
 
@@ -83,7 +84,6 @@ IndexNext(IndexScanState *node)
 ExprContext *econtext;
 ScanDirection direction;
 IndexScanDesc scandesc;
-HeapTuple tuple;
 TupleTableSlot *slot;
 
 /*
@@ -130,20 +130,10 @@ IndexNext(IndexScanState *node)
 /*
 * ok, now that we have what we need, fetch the next tuple.
 */
-while ((tuple = index_getnext(scandesc, direction)) != NULL)
+while (index_getnext_slot(scandesc, direction, slot))
 {
 CHECK_FOR_INTERRUPTS();
 
-/*
- * Store the scanned tuple in the scan tuple slot of the scan state.
- * Note: we pass 'false' because tuples returned by amgetnext are
- * pointers onto disk pages and must not be pfree()'d.
- */
-ExecStoreBufferHeapTuple(tuple, /* tuple to store */
-                         slot, /* slot to store in */
-                         scandesc->xs_cbuf); /* buffer containing
-                                              * tuple */
-
 /*
 * If the index was lossy, we have to recheck the index quals using
 * the fetched tuple.
@@ -183,7 +173,6 @@ IndexNextWithReorder(IndexScanState *node)
 EState *estate;
 ExprContext *econtext;
 IndexScanDesc scandesc;
-HeapTuple tuple;
 TupleTableSlot *slot;
 ReorderTuple *topmost = NULL;
 bool was_exact;
@@ -252,6 +241,8 @@ IndexNextWithReorder(IndexScanState *node)
 scandesc->xs_orderbynulls,
 node) <= 0)
 {
+HeapTuple tuple;
+
 tuple = reorderqueue_pop(node);
 
 /* Pass 'true', as the tuple in the queue is a palloc'd copy */
@@ -271,8 +262,7 @@ IndexNextWithReorder(IndexScanState *node)
 */
 next_indextuple:
 slot = node->ss.ss_ScanTupleSlot;
-tuple = index_getnext(scandesc, ForwardScanDirection);
-if (!tuple)
+if (!index_getnext_slot(scandesc, ForwardScanDirection, slot))
 {
 /*
 * No more tuples from the index. But we still need to drain any
@@ -282,14 +272,6 @@ next_indextuple:
 continue;
 }
 
-/*
- * Store the scanned tuple in the scan tuple slot of the scan state.
- */
-ExecStoreBufferHeapTuple(tuple, /* tuple to store */
-                         slot, /* slot to store in */
-                         scandesc->xs_cbuf); /* buffer containing
-                                              * tuple */
-
 /*
 * If the index was lossy, we have to recheck the index quals and
 * ORDER BY expressions using the fetched tuple.
@@ -358,7 +340,7 @@ next_indextuple:
 node) > 0))
 {
 /* Put this tuple to the queue */
-reorderqueue_push(node, tuple, lastfetched_vals, lastfetched_nulls);
+reorderqueue_push(node, slot, lastfetched_vals, lastfetched_nulls);
 continue;
 }
 else
@@ -478,7 +460,7 @@ reorderqueue_cmp(const pairingheap_node *a, const pairingheap_node *b,
 * Helper function to push a tuple to the reorder queue.
 */
 static void
-reorderqueue_push(IndexScanState *node, HeapTuple tuple,
+reorderqueue_push(IndexScanState *node, TupleTableSlot *slot,
                   Datum *orderbyvals, bool *orderbynulls)
 {
 IndexScanDesc scandesc = node->iss_ScanDesc;
@@ -488,7 +470,7 @@ reorderqueue_push(IndexScanState *node, HeapTuple tuple,
 int i;
 
 rt = (ReorderTuple *) palloc(sizeof(ReorderTuple));
-rt->htup = heap_copytuple(tuple);
+rt->htup = ExecCopySlotHeapTuple(slot);
 rt->orderbyvals =
 (Datum *) palloc(sizeof(Datum) * scandesc->numberOfOrderBys);
 rt->orderbynulls =
@@ -949,7 +931,7 @@ ExecInitIndexScan(IndexScan *node, EState *estate, int eflags)
 */
 ExecInitScanTupleSlot(estate, &indexstate->ss,
                       RelationGetDescr(currentRelation),
-                      &TTSOpsBufferHeapTuple);
+                      table_slot_callbacks(currentRelation));
 
 /*
 * Initialize result type and projection.
@@ -39,6 +39,7 @@
 
 #include "access/heapam.h"
 #include "access/htup_details.h"
+#include "access/tableam.h"
 #include "access/xact.h"
 #include "catalog/catalog.h"
 #include "commands/trigger.h"
@@ -2147,7 +2148,7 @@ ExecInitModifyTable(ModifyTable *node, EState *estate, int eflags)
 mtstate->mt_plans[i] = ExecInitNode(subplan, estate, eflags);
 mtstate->mt_scans[i] =
 ExecInitExtraTupleSlot(mtstate->ps.state, ExecGetResultType(mtstate->mt_plans[i]),
-                       &TTSOpsHeapTuple);
+                       table_slot_callbacks(resultRelInfo->ri_RelationDesc));
 
 /* Also let FDWs init themselves for foreign-table result rels */
 if (!resultRelInfo->ri_usesFdwDirectModify &&
@@ -2207,8 +2208,7 @@ ExecInitModifyTable(ModifyTable *node, EState *estate, int eflags)
 if (update_tuple_routing_needed)
 {
 ExecSetupChildParentMapForSubplan(mtstate);
-mtstate->mt_root_tuple_slot = MakeTupleTableSlot(RelationGetDescr(rel),
-                                                 &TTSOpsHeapTuple);
+mtstate->mt_root_tuple_slot = table_slot_create(rel, NULL);
 }
 
 /*
@@ -2320,8 +2320,8 @@ ExecInitModifyTable(ModifyTable *node, EState *estate, int eflags)
 
 /* initialize slot for the existing tuple */
 resultRelInfo->ri_onConflict->oc_Existing =
-ExecInitExtraTupleSlot(mtstate->ps.state, relationDesc,
-                       &TTSOpsBufferHeapTuple);
+table_slot_create(resultRelInfo->ri_RelationDesc,
+                  &mtstate->ps.state->es_tupleTable);
 
 /* create the tuple slot for the UPDATE SET projection */
 tupDesc = ExecTypeFromTL((List *) node->onConflictSet);
@@ -2430,15 +2430,18 @@ ExecInitModifyTable(ModifyTable *node, EState *estate, int eflags)
 for (i = 0; i < nplans; i++)
 {
 JunkFilter *j;
+TupleTableSlot *junkresslot;
 
 subplan = mtstate->mt_plans[i]->plan;
 if (operation == CMD_INSERT || operation == CMD_UPDATE)
 ExecCheckPlanOutput(resultRelInfo->ri_RelationDesc,
                     subplan->targetlist);
 
+junkresslot =
+ExecInitExtraTupleSlot(estate, NULL,
+                       table_slot_callbacks(resultRelInfo->ri_RelationDesc));
 j = ExecInitJunkFilter(subplan->targetlist,
-                       ExecInitExtraTupleSlot(estate, NULL,
-                                              &TTSOpsHeapTuple));
+                       junkresslot);
 
 if (operation == CMD_UPDATE || operation == CMD_DELETE)
 {
@@ -16,6 +16,7 @@
 
 #include "access/heapam.h"
 #include "access/relscan.h"
+#include "access/tableam.h"
 #include "access/tsmapi.h"
 #include "executor/executor.h"
 #include "executor/nodeSamplescan.h"
@@ -48,6 +49,7 @@ SampleNext(SampleScanState *node)
 {
 HeapTuple tuple;
 TupleTableSlot *slot;
+HeapScanDesc hscan;
 
 /*
 * if this is first call within a scan, initialize
@@ -61,11 +63,12 @@ SampleNext(SampleScanState *node)
 tuple = tablesample_getnext(node);
 
 slot = node->ss.ss_ScanTupleSlot;
+hscan = (HeapScanDesc) node->ss.ss_currentScanDesc;
 
 if (tuple)
 ExecStoreBufferHeapTuple(tuple, /* tuple to store */
                          slot, /* slot to store in */
-                         node->ss.ss_currentScanDesc->rs_cbuf); /* tuple's buffer */
+                         hscan->rs_cbuf); /* tuple's buffer */
 else
 ExecClearTuple(slot);
 
@@ -147,7 +150,7 @@ ExecInitSampleScan(SampleScan *node, EState *estate, int eflags)
 /* and create slot with appropriate rowtype */
 ExecInitScanTupleSlot(estate, &scanstate->ss,
                       RelationGetDescr(scanstate->ss.ss_currentRelation),
-                      &TTSOpsBufferHeapTuple);
+                      table_slot_callbacks(scanstate->ss.ss_currentRelation));
 
 /*
 * Initialize result type and projection.
@@ -219,7 +222,7 @@ ExecEndSampleScan(SampleScanState *node)
 * close heap scan
 */
 if (node->ss.ss_currentScanDesc)
-heap_endscan(node->ss.ss_currentScanDesc);
+table_endscan(node->ss.ss_currentScanDesc);
 }
 
 /* ----------------------------------------------------------------
@@ -319,19 +322,19 @@ tablesample_init(SampleScanState *scanstate)
 if (scanstate->ss.ss_currentScanDesc == NULL)
 {
 scanstate->ss.ss_currentScanDesc =
-heap_beginscan_sampling(scanstate->ss.ss_currentRelation,
-                        scanstate->ss.ps.state->es_snapshot,
-                        0, NULL,
-                        scanstate->use_bulkread,
-                        allow_sync,
-                        scanstate->use_pagemode);
+table_beginscan_sampling(scanstate->ss.ss_currentRelation,
+                         scanstate->ss.ps.state->es_snapshot,
+                         0, NULL,
+                         scanstate->use_bulkread,
+                         allow_sync,
+                         scanstate->use_pagemode);
 }
 else
 {
-heap_rescan_set_params(scanstate->ss.ss_currentScanDesc, NULL,
-                       scanstate->use_bulkread,
-                       allow_sync,
-                       scanstate->use_pagemode);
+table_rescan_set_params(scanstate->ss.ss_currentScanDesc, NULL,
+                        scanstate->use_bulkread,
+                        allow_sync,
+                        scanstate->use_pagemode);
 }
 
 pfree(params);
@@ -350,8 +353,9 @@ static HeapTuple
 tablesample_getnext(SampleScanState *scanstate)
 {
 TsmRoutine *tsm = scanstate->tsmroutine;
-HeapScanDesc scan = scanstate->ss.ss_currentScanDesc;
-HeapTuple tuple = &(scan->rs_ctup);
+TableScanDesc scan = scanstate->ss.ss_currentScanDesc;
+HeapScanDesc hscan = (HeapScanDesc) scan;
+HeapTuple tuple = &(hscan->rs_ctup);
 Snapshot snapshot = scan->rs_snapshot;
 bool pagemode = scan->rs_pageatatime;
 BlockNumber blockno;
@@ -359,14 +363,14 @@ tablesample_getnext(SampleScanState *scanstate)
 bool all_visible;
 OffsetNumber maxoffset;
 
-if (!scan->rs_inited)
+if (!hscan->rs_inited)
 {
 /*
 * return null immediately if relation is empty
 */
-if (scan->rs_nblocks == 0)
+if (hscan->rs_nblocks == 0)
 {
-Assert(!BufferIsValid(scan->rs_cbuf));
+Assert(!BufferIsValid(hscan->rs_cbuf));
 tuple->t_data = NULL;
 return NULL;
 }
@@ -380,15 +384,15 @@ tablesample_getnext(SampleScanState *scanstate)
 }
 }
 else
-blockno = scan->rs_startblock;
-Assert(blockno < scan->rs_nblocks);
+blockno = hscan->rs_startblock;
+Assert(blockno < hscan->rs_nblocks);
 heapgetpage(scan, blockno);
-scan->rs_inited = true;
+hscan->rs_inited = true;
 }
 else
 {
 /* continue from previously returned page/tuple */
-blockno = scan->rs_cblock; /* current page */
+blockno = hscan->rs_cblock; /* current page */
 }
 
 /*
@@ -396,9 +400,9 @@ tablesample_getnext(SampleScanState *scanstate)
 * visibility checks.
 */
 if (!pagemode)
-LockBuffer(scan->rs_cbuf, BUFFER_LOCK_SHARE);
+LockBuffer(hscan->rs_cbuf, BUFFER_LOCK_SHARE);
 
-page = (Page) BufferGetPage(scan->rs_cbuf);
+page = (Page) BufferGetPage(hscan->rs_cbuf);
 all_visible = PageIsAllVisible(page) && !snapshot->takenDuringRecovery;
 maxoffset = PageGetMaxOffsetNumber(page);
 
@@ -431,18 +435,18 @@ tablesample_getnext(SampleScanState *scanstate)
 if (all_visible)
 visible = true;
 else
-visible = SampleTupleVisible(tuple, tupoffset, scan);
+visible = SampleTupleVisible(tuple, tupoffset, hscan);
 
 /* in pagemode, heapgetpage did this for us */
 if (!pagemode)
 CheckForSerializableConflictOut(visible, scan->rs_rd, tuple,
-                                scan->rs_cbuf, snapshot);
+                                hscan->rs_cbuf, snapshot);
 
 if (visible)
 {
 /* Found visible tuple, return it. */
 if (!pagemode)
-LockBuffer(scan->rs_cbuf, BUFFER_LOCK_UNLOCK);
+LockBuffer(hscan->rs_cbuf, BUFFER_LOCK_UNLOCK);
 break;
 }
 else
@@ -457,7 +461,7 @@ tablesample_getnext(SampleScanState *scanstate)
 * it's time to move to the next.
 */
 if (!pagemode)
-LockBuffer(scan->rs_cbuf, BUFFER_LOCK_UNLOCK);
+LockBuffer(hscan->rs_cbuf, BUFFER_LOCK_UNLOCK);
 
 if (tsm->NextSampleBlock)
 {
@@ -469,7 +473,7 @@ tablesample_getnext(SampleScanState *scanstate)
 {
 /* Without NextSampleBlock, just do a plain forward seqscan. */
 blockno++;
-if (blockno >= scan->rs_nblocks)
+if (blockno >= hscan->rs_nblocks)
 blockno = 0;
 
 /*
@@ -485,7 +489,7 @@ tablesample_getnext(SampleScanState *scanstate)
 if (scan->rs_syncscan)
 ss_report_location(scan->rs_rd, blockno);
 
-finished = (blockno == scan->rs_startblock);
+finished = (blockno == hscan->rs_startblock);
 }
 
 /*
@@ -493,23 +497,23 @@ tablesample_getnext(SampleScanState *scanstate)
 */
 if (finished)
 {
-if (BufferIsValid(scan->rs_cbuf))
-ReleaseBuffer(scan->rs_cbuf);
-scan->rs_cbuf = InvalidBuffer;
-scan->rs_cblock = InvalidBlockNumber;
+if (BufferIsValid(hscan->rs_cbuf))
+ReleaseBuffer(hscan->rs_cbuf);
+hscan->rs_cbuf = InvalidBuffer;
+hscan->rs_cblock = InvalidBlockNumber;
 tuple->t_data = NULL;
-scan->rs_inited = false;
+hscan->rs_inited = false;
 return NULL;
 }
 
-Assert(blockno < scan->rs_nblocks);
+Assert(blockno < hscan->rs_nblocks);
 heapgetpage(scan, blockno);
 
 /* Re-establish state for new page */
 if (!pagemode)
-LockBuffer(scan->rs_cbuf, BUFFER_LOCK_SHARE);
+LockBuffer(hscan->rs_cbuf, BUFFER_LOCK_SHARE);
 
-page = (Page) BufferGetPage(scan->rs_cbuf);
+page = (Page) BufferGetPage(hscan->rs_cbuf);
 all_visible = PageIsAllVisible(page) && !snapshot->takenDuringRecovery;
 maxoffset = PageGetMaxOffsetNumber(page);
 }
@@ -517,7 +521,7 @@ tablesample_getnext(SampleScanState *scanstate)
 /* Count successfully-fetched tuples as heap fetches */
 pgstat_count_heap_getnext(scan->rs_rd);
 
-return &(scan->rs_ctup);
+return &(hscan->rs_ctup);
 }
 
 /*
@@ -526,7 +530,7 @@ tablesample_getnext(SampleScanState *scanstate)
 static bool
 SampleTupleVisible(HeapTuple tuple, OffsetNumber tupoffset, HeapScanDesc scan)
 {
-if (scan->rs_pageatatime)
+if (scan->rs_base.rs_pageatatime)
 {
 /*
 * In pageatatime mode, heapgetpage() already did visibility checks,
@@ -559,7 +563,7 @@ SampleTupleVisible(HeapTuple tuple, OffsetNumber tupoffset, HeapScanDesc scan)
 {
 /* Otherwise, we have to check the tuple individually. */
 return HeapTupleSatisfiesVisibility(tuple,
-                                    scan->rs_snapshot,
+                                    scan->rs_base.rs_snapshot,
                                     scan->rs_cbuf);
 }
 }
@@ -27,8 +27,8 @@
 */
 #include "postgres.h"
 
-#include "access/heapam.h"
 #include "access/relscan.h"
+#include "access/tableam.h"
 #include "executor/execdebug.h"
 #include "executor/nodeSeqscan.h"
 #include "utils/rel.h"
@@ -49,8 +49,7 @@ static TupleTableSlot *SeqNext(SeqScanState *node);
 static TupleTableSlot *
 SeqNext(SeqScanState *node)
 {
-HeapTuple tuple;
-HeapScanDesc scandesc;
+TableScanDesc scandesc;
 EState *estate;
 ScanDirection direction;
 TupleTableSlot *slot;
@@ -69,34 +68,18 @@ SeqNext(SeqScanState *node)
 * We reach here if the scan is not parallel, or if we're serially
 * executing a scan that was planned to be parallel.
 */
-scandesc = heap_beginscan(node->ss.ss_currentRelation,
-                          estate->es_snapshot,
-                          0, NULL);
+scandesc = table_beginscan(node->ss.ss_currentRelation,
+                           estate->es_snapshot,
+                           0, NULL);
 node->ss.ss_currentScanDesc = scandesc;
 }
 
 /*
 * get the next tuple from the table
 */
-tuple = heap_getnext(scandesc, direction);
-
-/*
- * save the tuple and the buffer returned to us by the access methods in
- * our scan tuple slot and return the slot. Note: we pass 'false' because
- * tuples returned by heap_getnext() are pointers onto disk pages and were
- * not created with palloc() and so should not be pfree()'d. Note also
- * that ExecStoreHeapTuple will increment the refcount of the buffer; the
- * refcount will not be dropped until the tuple table slot is cleared.
- */
-if (tuple)
-ExecStoreBufferHeapTuple(tuple, /* tuple to store */
-                         slot, /* slot to store in */
-                         scandesc->rs_cbuf); /* buffer associated
-                                              * with this tuple */
-else
-ExecClearTuple(slot);
-
-return slot;
+if (table_scan_getnextslot(scandesc, direction, slot))
+return slot;
+return NULL;
 }
 
 /*
@@ -174,7 +157,7 @@ ExecInitSeqScan(SeqScan *node, EState *estate, int eflags)
 /* and create slot with the appropriate rowtype */
 ExecInitScanTupleSlot(estate, &scanstate->ss,
                       RelationGetDescr(scanstate->ss.ss_currentRelation),
-                      &TTSOpsBufferHeapTuple);
+                      table_slot_callbacks(scanstate->ss.ss_currentRelation));
 
 /*
 * Initialize result type and projection.
@@ -200,7 +183,7 @@ ExecInitSeqScan(SeqScan *node, EState *estate, int eflags)
 void
 ExecEndSeqScan(SeqScanState *node)
 {
-HeapScanDesc scanDesc;
+TableScanDesc scanDesc;
 
 /*
 * get information from node
@@ -223,7 +206,7 @@ ExecEndSeqScan(SeqScanState *node)
 * close heap scan
 */
 if (scanDesc != NULL)
-heap_endscan(scanDesc);
+table_endscan(scanDesc);
 }
 
 /* ----------------------------------------------------------------
@@ -240,13 +223,13 @@ ExecEndSeqScan(SeqScanState *node)
 void
 ExecReScanSeqScan(SeqScanState *node)
 {
-HeapScanDesc scan;
+TableScanDesc scan;
 
 scan = node->ss.ss_currentScanDesc;
 
 if (scan != NULL)
-heap_rescan(scan, /* scan desc */
-            NULL); /* new scan keys */
+table_rescan(scan, /* scan desc */
+             NULL); /* new scan keys */
 
 ExecScanReScan((ScanState *) node);
 }
@@ -269,7 +252,8 @@ ExecSeqScanEstimate(SeqScanState *node,
 {
 EState *estate = node->ss.ps.state;
 
-node->pscan_len = heap_parallelscan_estimate(estate->es_snapshot);
+node->pscan_len = table_parallelscan_estimate(node->ss.ss_currentRelation,
+                                              estate->es_snapshot);
 shm_toc_estimate_chunk(&pcxt->estimator, node->pscan_len);
 shm_toc_estimate_keys(&pcxt->estimator, 1);
 }
@@ -285,15 +269,15 @@ ExecSeqScanInitializeDSM(SeqScanState *node,
                          ParallelContext *pcxt)
 {
 EState *estate = node->ss.ps.state;
-ParallelHeapScanDesc pscan;
+ParallelTableScanDesc pscan;
 
 pscan = shm_toc_allocate(pcxt->toc, node->pscan_len);
-heap_parallelscan_initialize(pscan,
-                             node->ss.ss_currentRelation,
-                             estate->es_snapshot);
+table_parallelscan_initialize(node->ss.ss_currentRelation,
+                              pscan,
+                              estate->es_snapshot);
 shm_toc_insert(pcxt->toc, node->ss.ps.plan->plan_node_id, pscan);
 node->ss.ss_currentScanDesc =
-heap_beginscan_parallel(node->ss.ss_currentRelation, pscan);
+table_beginscan_parallel(node->ss.ss_currentRelation, pscan);
 }
 
 /* ----------------------------------------------------------------
@@ -306,9 +290,10 @@ void
 ExecSeqScanReInitializeDSM(SeqScanState *node,
                            ParallelContext *pcxt)
 {
-HeapScanDesc scan = node->ss.ss_currentScanDesc;
+ParallelTableScanDesc pscan;
 
-heap_parallelscan_reinitialize(scan->rs_parallel);
+pscan = node->ss.ss_currentScanDesc->rs_parallel;
+table_parallelscan_reinitialize(node->ss.ss_currentRelation, pscan);
 }
 
 /* ----------------------------------------------------------------
@@ -321,9 +306,9 @@ void
 ExecSeqScanInitializeWorker(SeqScanState *node,
                             ParallelWorkerContext *pwcxt)
 {
-ParallelHeapScanDesc pscan;
+ParallelTableScanDesc pscan;
 
 pscan = shm_toc_lookup(pwcxt->toc, node->ss.ps.plan->plan_node_id, false);
 node->ss.ss_currentScanDesc =
-heap_beginscan_parallel(node->ss.ss_currentRelation, pscan);
+table_beginscan_parallel(node->ss.ss_currentRelation, pscan);
 }
@@ -24,6 +24,7 @@
 
 #include "access/heapam.h"
 #include "access/sysattr.h"
+#include "access/tableam.h"
 #include "catalog/pg_type.h"
 #include "executor/execdebug.h"
 #include "executor/nodeTidscan.h"
@@ -538,7 +539,7 @@ ExecInitTidScan(TidScan *node, EState *estate, int eflags)
 */
 ExecInitScanTupleSlot(estate, &tidstate->ss,
                       RelationGetDescr(currentRelation),
-                      &TTSOpsBufferHeapTuple);
+                      table_slot_callbacks(currentRelation));
 
 /*
 * Initialize result type and projection.