tableam: Add and use scan APIs.

To allow table accesses to not depend directly on heap, several
new abstractions are needed. Specifically:

1) Heap scans need to be generalized into table scans. Do this by
   introducing TableScanDesc, which will be the "base class" for
   individual AMs. This contains the AM-independent fields from
   HeapScanDesc.

   The previous heap_{beginscan,rescan,endscan} et al. have been
   replaced with table_ versions.

   There's no direct replacement for heap_getnext(), as that returned
   a HeapTuple, which is undesirable for other AMs. Instead there's
   table_scan_getnextslot().  But note that heap_getnext() lives on;
   it's still widely used to access catalog tables.

   This is achieved by the new scan_begin, scan_end, scan_rescan and
   scan_getnextslot callbacks; a usage sketch follows below.
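
   In consumer code the replacement follows this pattern (a minimal
   sketch; scan_all_tuples is a hypothetical caller, and
   table_slot_create() is described under a) further below -
   compare the validateCheckConstraint hunk in the diff):

       #include "access/tableam.h"
       #include "executor/tuptable.h"

       static void
       scan_all_tuples(Relation rel, Snapshot snapshot)
       {
           /* slot whose ops match the relation's AM; see a) below */
           TupleTableSlot *slot = table_slot_create(rel, NULL);
           TableScanDesc scan = table_beginscan(rel, snapshot, 0, NULL);

           /* each successful call stores the next tuple in the slot */
           while (table_scan_getnextslot(scan, ForwardScanDirection, slot))
           {
               /* ... process the tuple held by the slot ... */
           }

           table_endscan(scan);
           ExecDropSingleTupleTableSlot(slot);
       }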

2) The portion of parallel scans that's shared between backends needs
   to be set up without callers having to do per-AM work. To achieve
   that, new parallelscan_{estimate, initialize, reinitialize}
   callbacks are introduced, which operate on a new
   ParallelTableScanDesc, which again can be subclassed by AMs.

   As it is likely that several AMs are going to be block oriented,
   block-oriented callbacks that can be shared between such AMs are
   provided and used by heap: table_block_parallelscan_{estimate,
   initialize, reinitialize} serve as the callbacks, and
   table_block_parallelscan_{nextpage, init} are for use inside AMs
   (see the sketch below). These operate on a
   ParallelBlockTableScanDesc.
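
   A block-oriented AM can thus point its TableAmRoutine straight at
   the shared helpers, roughly as heap does. A sketch, with the scan
   and fetch callbacks from 1) and 3) elided:

       #include "access/tableam.h"

       static const TableAmRoutine sample_am_methods = {
           .type = T_TableAmRoutine,

           /*
            * Shared block-oriented parallel scan support. The AM's
            * own scan code then uses table_block_parallelscan_init()
            * and table_block_parallelscan_nextpage() to hand out
            * blocks to the cooperating workers.
            */
           .parallelscan_estimate = table_block_parallelscan_estimate,
           .parallelscan_initialize = table_block_parallelscan_initialize,
           .parallelscan_reinitialize = table_block_parallelscan_reinitialize,
       };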

3) Index scans need to be able to access tables to return a tuple, and
   state like buffers needs to be kept across the individual accesses
   to the heap. That's now handled by introducing a sort-of-scan
   IndexFetchTable, which again is intended to be subclassed by
   individual AMs (for heap, IndexFetchHeap).

   The relevant callbacks for an AM are index_fetch_{begin, end,
   reset} to manage the necessary state, and index_fetch_tuple to
   retrieve an indexed tuple.  Note that index_fetch_tuple
   implementations need to be smarter than blindly fetching tuples
   for AMs that have optimizations similar to heap's HOT - the
   currently alive tuple in the update chain needs to be fetched if
   appropriate.

   As with table_scan_getnextslot(), it's undesirable to continue
   returning HeapTuples. Thus index_fetch_heap (might want to rename
   that later) now accepts a slot as an argument. Core code doesn't
   have many call sites performing index scans without going through
   the systable_* API (in contrast to the loads of heap_getnext()
   calls working directly with HeapTuples).

   Index scans now store the result of a search in
   IndexScanDesc->xs_heaptid, rather than xs_ctup->t_self. As the
   target is not generally a HeapTuple anymore, that seems cleaner.
   A sketch of the resulting fetch loop follows.
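
   A minimal sketch (fetch_index_matches is a hypothetical caller
   that owns an already-started IndexScanDesc and a matching slot):

       #include "access/genam.h"
       #include "access/relscan.h"

       static void
       fetch_index_matches(IndexScanDesc scan, TupleTableSlot *slot)
       {
           while (index_getnext_tid(scan, ForwardScanDirection) != NULL)
           {
               /* the TID found by the index is now in scan->xs_heaptid */
               if (index_fetch_heap(scan, slot))
               {
                   /* ... a visible tuple version is in the slot ... */
               }

               /*
                * On false there was no visible version at that TID,
                * e.g. in a HOT chain; just continue with the next
                * index entry.
                */
           }
       }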

To be able to sensibly adapt code to use the above, two further
callbacks have been introduced:

a) slot_callbacks returns a TupleTableSlotOps* suitable for creating
   slots capable of holding a tuple of the AM's
   type. table_slot_callbacks() and table_slot_create() are based
   upon that, but have additional logic to deal with views, foreign
   tables, etc.

   While this change could have been done separately, nearly all the
   call sites that needed to be adapted for the rest of this commit
   would also have needed to be adapted for table_slot_callbacks(),
   making separation not worthwhile. Both helpers appear in the
   fragment below.
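
   The two idioms, as seen in the ATRewriteTable hunk in the diff
   (a fragment; rel and oldTupDesc stand in for any relation and
   alternate tuple descriptor):

       /* slot carrying rel's own tuple descriptor, ops matching its AM */
       TupleTableSlot *slot = table_slot_create(rel, NULL);

       /* slot with a different descriptor, ops still matching rel's AM */
       TupleTableSlot *oldslot =
           MakeSingleTupleTableSlot(oldTupDesc, table_slot_callbacks(rel));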

b) tuple_satisfies_snapshot checks whether the tuple in a slot is
   currently visible according to a snapshot. That's required because
   a few places now don't have a buffer + HeapTuple around, but only a
   slot (which in heap's case internally has that information); see
   the sketch below.
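
   A minimal sketch, assuming the wrapper follows the table_ naming
   of the other callbacks:

       /* AM-independent visibility test on whatever the slot holds */
       if (table_tuple_satisfies_snapshot(rel, slot, snapshot))
       {
           /* the slot's tuple is visible under "snapshot" */
       }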

Additionally a few infrastructure changes were needed:

I) SysScanDesc, as used by systable_{beginscan, getnext} et al., now
   internally uses a slot to keep track of tuples. While
   systable_getnext() still returns HeapTuples, and will do so for the
   foreseeable future, the index API (see 3) above) now only deals
   with slots; see the example below.
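
   Catalog callers are thus unaffected; a typical catalog scan still
   reads as follows (sequential-scan form: no index, no scan keys,
   default catalog snapshot):

       SysScanDesc sscan;
       HeapTuple   tup;

       sscan = systable_beginscan(rel, InvalidOid, false, NULL, 0, NULL);
       while (HeapTupleIsValid(tup = systable_getnext(sscan)))
       {
           /* catalog code keeps working with HeapTuples, as before */
       }
       systable_endscan(sscan);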

The remainder, and largest part, of this commit is then adjusting all
scans in postgres to use the new APIs.

Author: Andres Freund, Haribabu Kommi, Alvaro Herrera
Discussion:
    https://postgr.es/m/20180703070645.wchpu5muyto5n647@alap3.anarazel.de
    https://postgr.es/m/20160812231527.GA690404@alvherre.pgsql
commit c2fe139c20 (parent a478415281)
Committed by Andres Freund, 2019-03-11 12:46:41 -07:00
63 changed files with 2030 additions and 1265 deletions

src/backend/commands/tablecmds.c

@@ -4736,12 +4736,9 @@ ATRewriteTable(AlteredTableInfo *tab, Oid OIDNewHeap, LOCKMODE lockmode)
if (newrel || needscan)
{
ExprContext *econtext;
Datum *values;
bool *isnull;
TupleTableSlot *oldslot;
TupleTableSlot *newslot;
HeapScanDesc scan;
HeapTuple tuple;
TableScanDesc scan;
MemoryContext oldCxt;
List *dropped_attrs = NIL;
ListCell *lc;
@@ -4769,19 +4766,27 @@ ATRewriteTable(AlteredTableInfo *tab, Oid OIDNewHeap, LOCKMODE lockmode)
econtext = GetPerTupleExprContext(estate);
/*
* Make tuple slots for old and new tuples. Note that even when the
* tuples are the same, the tupDescs might not be (consider ADD COLUMN
* without a default).
* Create necessary tuple slots. When rewriting, two slots are needed,
* otherwise one suffices. In the case where one slot suffices, we
* need to use the new tuple descriptor, otherwise some constraints
* can't be evaluated. Note that even when the tuple layout is the
* same and no rewrite is required, the tupDescs might not be
* (consider ADD COLUMN without a default).
*/
oldslot = MakeSingleTupleTableSlot(oldTupDesc, &TTSOpsHeapTuple);
newslot = MakeSingleTupleTableSlot(newTupDesc, &TTSOpsHeapTuple);
/* Preallocate values/isnull arrays */
i = Max(newTupDesc->natts, oldTupDesc->natts);
values = (Datum *) palloc(i * sizeof(Datum));
isnull = (bool *) palloc(i * sizeof(bool));
memset(values, 0, i * sizeof(Datum));
memset(isnull, true, i * sizeof(bool));
if (tab->rewrite)
{
Assert(newrel != NULL);
oldslot = MakeSingleTupleTableSlot(oldTupDesc,
table_slot_callbacks(oldrel));
newslot = MakeSingleTupleTableSlot(newTupDesc,
table_slot_callbacks(newrel));
}
else
{
oldslot = MakeSingleTupleTableSlot(newTupDesc,
table_slot_callbacks(oldrel));
newslot = NULL;
}
/*
* Any attributes that are dropped according to the new tuple
@@ -4799,7 +4804,7 @@ ATRewriteTable(AlteredTableInfo *tab, Oid OIDNewHeap, LOCKMODE lockmode)
* checking all the constraints.
*/
snapshot = RegisterSnapshot(GetLatestSnapshot());
scan = heap_beginscan(oldrel, snapshot, 0, NULL);
scan = table_beginscan(oldrel, snapshot, 0, NULL);
/*
* Switch to per-tuple memory context and reset it for each tuple
@@ -4807,55 +4812,69 @@ ATRewriteTable(AlteredTableInfo *tab, Oid OIDNewHeap, LOCKMODE lockmode)
*/
oldCxt = MemoryContextSwitchTo(GetPerTupleMemoryContext(estate));
while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
while (table_scan_getnextslot(scan, ForwardScanDirection, oldslot))
{
TupleTableSlot *insertslot;
if (tab->rewrite > 0)
{
/* Extract data from old tuple */
heap_deform_tuple(tuple, oldTupDesc, values, isnull);
slot_getallattrs(oldslot);
ExecClearTuple(newslot);
/* copy attributes */
memcpy(newslot->tts_values, oldslot->tts_values,
sizeof(Datum) * oldslot->tts_nvalid);
memcpy(newslot->tts_isnull, oldslot->tts_isnull,
sizeof(bool) * oldslot->tts_nvalid);
/* Set dropped attributes to null in new tuple */
foreach(lc, dropped_attrs)
isnull[lfirst_int(lc)] = true;
newslot->tts_isnull[lfirst_int(lc)] = true;
/*
* Process supplied expressions to replace selected columns.
* Expression inputs come from the old tuple.
*/
ExecStoreHeapTuple(tuple, oldslot, false);
econtext->ecxt_scantuple = oldslot;
foreach(l, tab->newvals)
{
NewColumnValue *ex = lfirst(l);
values[ex->attnum - 1] = ExecEvalExpr(ex->exprstate,
econtext,
&isnull[ex->attnum - 1]);
newslot->tts_values[ex->attnum - 1]
= ExecEvalExpr(ex->exprstate,
econtext,
&newslot->tts_isnull[ex->attnum - 1]);
}
/*
* Form the new tuple. Note that we don't explicitly pfree it,
* since the per-tuple memory context will be reset shortly.
*/
tuple = heap_form_tuple(newTupDesc, values, isnull);
ExecStoreVirtualTuple(newslot);
/*
* Constraints might reference the tableoid column, so
* initialize t_tableOid before evaluating them.
*/
tuple->t_tableOid = RelationGetRelid(oldrel);
newslot->tts_tableOid = RelationGetRelid(oldrel);
insertslot = newslot;
}
else
{
/*
* If there's no rewrite, old and new table are guaranteed to
* have the same AM, so we can just use the old slot to
* verify new constraints etc.
*/
insertslot = oldslot;
}
/* Now check any constraints on the possibly-changed tuple */
ExecStoreHeapTuple(tuple, newslot, false);
econtext->ecxt_scantuple = newslot;
econtext->ecxt_scantuple = insertslot;
foreach(l, notnull_attrs)
{
int attn = lfirst_int(l);
if (heap_attisnull(tuple, attn + 1, newTupDesc))
if (slot_attisnull(insertslot, attn + 1))
{
Form_pg_attribute attr = TupleDescAttr(newTupDesc, attn);
@@ -4905,6 +4924,9 @@ ATRewriteTable(AlteredTableInfo *tab, Oid OIDNewHeap, LOCKMODE lockmode)
/* Write the tuple out to the new relation */
if (newrel)
{
HeapTuple tuple;
tuple = ExecFetchSlotHeapTuple(newslot, true, NULL);
heap_insert(newrel, tuple, mycid, hi_options, bistate);
ItemPointerCopy(&tuple->t_self, &newslot->tts_tid);
}
@@ -4915,11 +4937,12 @@ ATRewriteTable(AlteredTableInfo *tab, Oid OIDNewHeap, LOCKMODE lockmode)
}
MemoryContextSwitchTo(oldCxt);
heap_endscan(scan);
table_endscan(scan);
UnregisterSnapshot(snapshot);
ExecDropSingleTupleTableSlot(oldslot);
ExecDropSingleTupleTableSlot(newslot);
if (newslot)
ExecDropSingleTupleTableSlot(newslot);
}
FreeExecutorState(estate);
@@ -5310,7 +5333,7 @@ find_typed_table_dependencies(Oid typeOid, const char *typeName, DropBehavior be
{
Relation classRel;
ScanKeyData key[1];
HeapScanDesc scan;
TableScanDesc scan;
HeapTuple tuple;
List *result = NIL;
@@ -5321,7 +5344,7 @@ find_typed_table_dependencies(Oid typeOid, const char *typeName, DropBehavior be
BTEqualStrategyNumber, F_OIDEQ,
ObjectIdGetDatum(typeOid));
scan = heap_beginscan_catalog(classRel, 1, key);
scan = table_beginscan_catalog(classRel, 1, key);
while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
{
@@ -5337,7 +5360,7 @@ find_typed_table_dependencies(Oid typeOid, const char *typeName, DropBehavior be
result = lappend_oid(result, classform->oid);
}
heap_endscan(scan);
table_endscan(scan);
table_close(classRel, AccessShareLock);
return result;
@@ -8822,9 +8845,7 @@ validateCheckConstraint(Relation rel, HeapTuple constrtup)
char *conbin;
Expr *origexpr;
ExprState *exprstate;
TupleDesc tupdesc;
HeapScanDesc scan;
HeapTuple tuple;
TableScanDesc scan;
ExprContext *econtext;
MemoryContext oldcxt;
TupleTableSlot *slot;
@@ -8859,12 +8880,11 @@ validateCheckConstraint(Relation rel, HeapTuple constrtup)
exprstate = ExecPrepareExpr(origexpr, estate);
econtext = GetPerTupleExprContext(estate);
tupdesc = RelationGetDescr(rel);
slot = MakeSingleTupleTableSlot(tupdesc, &TTSOpsHeapTuple);
slot = table_slot_create(rel, NULL);
econtext->ecxt_scantuple = slot;
snapshot = RegisterSnapshot(GetLatestSnapshot());
scan = heap_beginscan(rel, snapshot, 0, NULL);
scan = table_beginscan(rel, snapshot, 0, NULL);
/*
* Switch to per-tuple memory context and reset it for each tuple
@@ -8872,10 +8892,8 @@ validateCheckConstraint(Relation rel, HeapTuple constrtup)
*/
oldcxt = MemoryContextSwitchTo(GetPerTupleMemoryContext(estate));
while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
while (table_scan_getnextslot(scan, ForwardScanDirection, slot))
{
ExecStoreHeapTuple(tuple, slot, false);
if (!ExecCheck(exprstate, econtext))
ereport(ERROR,
(errcode(ERRCODE_CHECK_VIOLATION),
@@ -8887,7 +8905,7 @@ validateCheckConstraint(Relation rel, HeapTuple constrtup)
}
MemoryContextSwitchTo(oldcxt);
heap_endscan(scan);
table_endscan(scan);
UnregisterSnapshot(snapshot);
ExecDropSingleTupleTableSlot(slot);
FreeExecutorState(estate);
@@ -8906,8 +8924,8 @@ validateForeignKeyConstraint(char *conname,
Oid pkindOid,
Oid constraintOid)
{
HeapScanDesc scan;
HeapTuple tuple;
TupleTableSlot *slot;
TableScanDesc scan;
Trigger trig;
Snapshot snapshot;
@@ -8942,9 +8960,10 @@ validateForeignKeyConstraint(char *conname,
* ereport(ERROR) and that's that.
*/
snapshot = RegisterSnapshot(GetLatestSnapshot());
scan = heap_beginscan(rel, snapshot, 0, NULL);
slot = table_slot_create(rel, NULL);
scan = table_beginscan(rel, snapshot, 0, NULL);
while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
while (table_scan_getnextslot(scan, ForwardScanDirection, slot))
{
LOCAL_FCINFO(fcinfo, 0);
TriggerData trigdata;
@@ -8962,7 +8981,8 @@ validateForeignKeyConstraint(char *conname,
trigdata.type = T_TriggerData;
trigdata.tg_event = TRIGGER_EVENT_INSERT | TRIGGER_EVENT_ROW;
trigdata.tg_relation = rel;
trigdata.tg_trigtuple = tuple;
trigdata.tg_trigtuple = ExecFetchSlotHeapTuple(slot, true, NULL);
trigdata.tg_trigslot = slot;
trigdata.tg_newtuple = NULL;
trigdata.tg_trigger = &trig;
@@ -8971,8 +8991,9 @@ validateForeignKeyConstraint(char *conname,
RI_FKey_check_ins(fcinfo);
}
heap_endscan(scan);
table_endscan(scan);
UnregisterSnapshot(snapshot);
ExecDropSingleTupleTableSlot(slot);
}
static void
@@ -11618,7 +11639,7 @@ AlterTableMoveAll(AlterTableMoveAllStmt *stmt)
ListCell *l;
ScanKeyData key[1];
Relation rel;
HeapScanDesc scan;
TableScanDesc scan;
HeapTuple tuple;
Oid orig_tablespaceoid;
Oid new_tablespaceoid;
@@ -11683,7 +11704,7 @@ AlterTableMoveAll(AlterTableMoveAllStmt *stmt)
ObjectIdGetDatum(orig_tablespaceoid));
rel = table_open(RelationRelationId, AccessShareLock);
scan = heap_beginscan_catalog(rel, 1, key);
scan = table_beginscan_catalog(rel, 1, key);
while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
{
Form_pg_class relForm = (Form_pg_class) GETSTRUCT(tuple);
@@ -11742,7 +11763,7 @@ AlterTableMoveAll(AlterTableMoveAllStmt *stmt)
relations = lappend_oid(relations, relOid);
}
heap_endscan(scan);
table_endscan(scan);
table_close(rel, AccessShareLock);
if (relations == NIL)