Mirror of https://github.com/postgres/postgres.git
pgindent run for 9.4
This includes removing tabs after periods in C comments, which was applied to back branches, so this change should not affect backpatching.
@@ -317,7 +317,7 @@ ExecMarkPos(PlanState *node)
 *
 * NOTE: the semantics of this are that the first ExecProcNode following
 * the restore operation will yield the same tuple as the first one following
-* the mark operation. It is unspecified what happens to the plan node's
+* the mark operation. It is unspecified what happens to the plan node's
 * result TupleTableSlot. (In most cases the result slot is unchanged by
 * a restore, but the node may choose to clear it or to load it with the
 * restored-to tuple.) Hence the caller should discard any previously
@@ -397,7 +397,7 @@ ExecSupportsMarkRestore(NodeTag plantype)
 /*
 * T_Result only supports mark/restore if it has a child plan that
 * does, so we do not have enough information to give a really
-* correct answer. However, for current uses it's enough to
+* correct answer. However, for current uses it's enough to
 * always say "false", because this routine is not asked about
 * gating Result plans, only base-case Results.
 */
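The mark/restore contract spelled out in the first hunk can be illustrated outside the executor. A minimal standalone sketch (hypothetical names, not PostgreSQL code): after a restore, the next fetch repeats the tuple that followed the mark.

```c
/* Illustrative sketch of the mark/restore contract described above;
 * hypothetical names, not PostgreSQL executor code. */
#include <stdio.h>

typedef struct Scan
{
    const int  *rows;       /* the "plan output" */
    int         nrows;
    int         pos;        /* next row to return */
    int         mark;       /* position saved by scan_markpos() */
} Scan;

static int scan_next(Scan *s)      /* like ExecProcNode: returns next row */
{
    return (s->pos < s->nrows) ? s->rows[s->pos++] : -1;
}

static void scan_markpos(Scan *s)  { s->mark = s->pos; }
static void scan_restrpos(Scan *s) { s->pos = s->mark; }

int main(void)
{
    int  data[] = {10, 20, 30};
    Scan s = {data, 3, 0, 0};

    printf("%d\n", scan_next(&s));   /* 10 */
    scan_markpos(&s);                /* remember position before 20 */
    printf("%d\n", scan_next(&s));   /* 20 */
    scan_restrpos(&s);               /* rewind */
    /* first fetch after restore yields the same row as after the mark */
    printf("%d\n", scan_next(&s));   /* 20 again */
    return 0;
}
```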
@@ -142,7 +142,7 @@ execCurrentOf(CurrentOfExpr *cexpr,

 /*
 * This table didn't produce the cursor's current row; some other
-* inheritance child of the same parent must have. Signal caller to
+* inheritance child of the same parent must have. Signal caller to
 * do nothing on this table.
 */
 return false;
@@ -52,7 +52,7 @@
 *
 * Initialize the Junk filter.
 *
-* The source targetlist is passed in. The output tuple descriptor is
+* The source targetlist is passed in. The output tuple descriptor is
 * built from the non-junk tlist entries, plus the passed specification
 * of whether to include room for an OID or not.
 * An optional resultSlot can be passed as well.
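The junk-filter behavior described above, building the output row from only the non-junk tlist entries, reduces to a simple filtering step. A minimal sketch with hypothetical names:

```c
/* Sketch of the junk-filter idea described above: build an output row
 * from only the non-junk entries of a target list. Hypothetical names. */
#include <stdio.h>
#include <stdbool.h>

typedef struct TargetEntry
{
    int   value;   /* stand-in for the evaluated column value */
    bool  resjunk; /* true = internal-use column, excluded from output */
} TargetEntry;

/* Copy non-junk values into out[]; return the output column count. */
static int junk_filter(const TargetEntry *tlist, int n, int *out)
{
    int natts = 0;
    for (int i = 0; i < n; i++)
        if (!tlist[i].resjunk)
            out[natts++] = tlist[i].value;
    return natts;
}

int main(void)
{
    TargetEntry tlist[] = {{1, false}, {42, true /* e.g. a row id */}, {3, false}};
    int out[3];
    int natts = junk_filter(tlist, 3, out);
    for (int i = 0; i < natts; i++)
        printf("%d ", out[i]);      /* prints: 1 3 */
    printf("\n");
    return 0;
}
```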
@@ -19,7 +19,7 @@
 * ExecutorRun accepts direction and count arguments that specify whether
 * the plan is to be executed forwards, backwards, and for how many tuples.
 * In some cases ExecutorRun may be called multiple times to process all
-* the tuples for a plan. It is also acceptable to stop short of executing
+* the tuples for a plan. It is also acceptable to stop short of executing
 * the whole plan (but only if it is a SELECT).
 *
 * ExecutorFinish must be called after the final ExecutorRun call and
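The direction/count contract described in this hunk can be sketched as a plain fetch loop; count = 0 here stands for "run the whole plan", and the names are hypothetical, not the executor's real API:

```c
/* Sketch of the ExecutorRun contract described above: fetch tuples in a
 * given direction, stopping after count tuples (count = 0 means "run to
 * completion"). Hypothetical names, not the executor's real signature. */
#include <stdio.h>

typedef enum { BACKWARD = -1, FORWARD = 1 } Direction;

static void executor_run(const int *rows, int nrows,
                         Direction dir, long count)
{
    int  pos = (dir == FORWARD) ? 0 : nrows - 1;
    long fetched = 0;

    while (pos >= 0 && pos < nrows)
    {
        printf("%d\n", rows[pos]);
        pos += dir;
        /* stop short once the caller's count is satisfied */
        if (count > 0 && ++fetched >= count)
            break;
    }
}

int main(void)
{
    int rows[] = {1, 2, 3, 4};
    executor_run(rows, 4, FORWARD, 2);   /* 1 2 : stops short */
    executor_run(rows, 4, BACKWARD, 0);  /* 4 3 2 1 : whole plan */
    return 0;
}
```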
@@ -329,12 +329,12 @@ standard_ExecutorRun(QueryDesc *queryDesc,
 * ExecutorFinish
 *
 * This routine must be called after the last ExecutorRun call.
-* It performs cleanup such as firing AFTER triggers. It is
+* It performs cleanup such as firing AFTER triggers. It is
 * separate from ExecutorEnd because EXPLAIN ANALYZE needs to
 * include these actions in the total runtime.
 *
 * We provide a function hook variable that lets loadable plugins
-* get control when ExecutorFinish is called. Such a plugin would
+* get control when ExecutorFinish is called. Such a plugin would
 * normally call standard_ExecutorFinish().
 *
 * ----------------------------------------------------------------
@@ -565,7 +565,7 @@ ExecCheckRTEPerms(RangeTblEntry *rte)
 * userid to check as: current user unless we have a setuid indication.
 *
 * Note: GetUserId() is presently fast enough that there's no harm in
-* calling it separately for each RTE. If that stops being true, we could
+* calling it separately for each RTE. If that stops being true, we could
 * call it once in ExecCheckRTPerms and pass the userid down from there.
 * But for now, no need for the extra clutter.
 */
@@ -1184,7 +1184,7 @@ InitResultRelInfo(ResultRelInfo *resultRelInfo,
 * if so it doesn't matter which one we pick.) However, it is sometimes
 * necessary to fire triggers on other relations; this happens mainly when an
 * RI update trigger queues additional triggers on other relations, which will
-* be processed in the context of the outer query. For efficiency's sake,
+* be processed in the context of the outer query. For efficiency's sake,
 * we want to have a ResultRelInfo for those triggers too; that can avoid
 * repeated re-opening of the relation. (It also provides a way for EXPLAIN
 * ANALYZE to report the runtimes of such triggers.) So we make additional
@@ -1221,7 +1221,7 @@ ExecGetTriggerResultRel(EState *estate, Oid relid)
 /*
 * Open the target relation's relcache entry. We assume that an
 * appropriate lock is still held by the backend from whenever the trigger
-* event got queued, so we need take no new lock here. Also, we need not
+* event got queued, so we need take no new lock here. Also, we need not
 * recheck the relkind, so no need for CheckValidResultRel.
 */
 rel = heap_open(relid, NoLock);
@@ -1327,7 +1327,7 @@ ExecPostprocessPlan(EState *estate)

 /*
 * Run any secondary ModifyTable nodes to completion, in case the main
-* query did not fetch all rows from them. (We do this to ensure that
+* query did not fetch all rows from them. (We do this to ensure that
 * such nodes have predictable results.)
 */
 foreach(lc, estate->es_auxmodifytables)
@@ -1639,7 +1639,8 @@ ExecWithCheckOptions(ResultRelInfo *resultRelInfo,
 TupleTableSlot *slot, EState *estate)
 {
 ExprContext *econtext;
-ListCell *l1, *l2;
+ListCell *l1,
+         *l2;

 /*
 * We will use the EState's per-tuple context for evaluating constraint
@@ -1655,7 +1656,7 @@ ExecWithCheckOptions(ResultRelInfo *resultRelInfo,
 l2, resultRelInfo->ri_WithCheckOptionExprs)
 {
 WithCheckOption *wco = (WithCheckOption *) lfirst(l1);
-ExprState *wcoExpr = (ExprState *) lfirst(l2);
+ExprState *wcoExpr = (ExprState *) lfirst(l2);

 /*
 * WITH CHECK OPTION checks are intended to ensure that the new tuple
@@ -1667,8 +1668,8 @@ ExecWithCheckOptions(ResultRelInfo *resultRelInfo,
 if (!ExecQual((List *) wcoExpr, econtext, false))
 ereport(ERROR,
 (errcode(ERRCODE_WITH_CHECK_OPTION_VIOLATION),
-errmsg("new row violates WITH CHECK OPTION for view \"%s\"",
-wco->viewname),
+errmsg("new row violates WITH CHECK OPTION for view \"%s\"",
+wco->viewname),
 errdetail("Failing row contains %s.",
 ExecBuildSlotValueDescription(slot,
 RelationGetDescr(resultRelInfo->ri_RelationDesc),
@@ -1681,7 +1682,7 @@ ExecWithCheckOptions(ResultRelInfo *resultRelInfo,
 *
 * This is intentionally very similar to BuildIndexValueDescription, but
 * unlike that function, we truncate long field values (to at most maxfieldlen
-* bytes). That seems necessary here since heap field values could be very
+* bytes). That seems necessary here since heap field values could be very
 * long, whereas index entries typically aren't so wide.
 *
 * Also, unlike the case with index entries, we need to be prepared to ignore
@@ -1875,7 +1876,7 @@ EvalPlanQual(EState *estate, EPQState *epqstate,
 *tid = copyTuple->t_self;

 /*
-* Need to run a recheck subquery. Initialize or reinitialize EPQ state.
+* Need to run a recheck subquery. Initialize or reinitialize EPQ state.
 */
 EvalPlanQualBegin(epqstate, estate);

@@ -1958,7 +1959,7 @@ EvalPlanQualFetch(EState *estate, Relation relation, int lockmode,

 /*
 * If xmin isn't what we're expecting, the slot must have been
-* recycled and reused for an unrelated tuple. This implies that
+* recycled and reused for an unrelated tuple. This implies that
 * the latest version of the row was deleted, so we need do
 * nothing. (Should be safe to examine xmin without getting
 * buffer's content lock, since xmin never changes in an existing
@@ -2199,7 +2200,7 @@ EvalPlanQualGetTuple(EPQState *epqstate, Index rti)

 /*
 * Fetch the current row values for any non-locked relations that need
-* to be scanned by an EvalPlanQual operation. origslot must have been set
+* to be scanned by an EvalPlanQual operation. origslot must have been set
 * to contain the current result row (top-level row) that we need to recheck.
 */
 void
@@ -2428,7 +2429,7 @@ EvalPlanQualStart(EPQState *epqstate, EState *parentestate, Plan *planTree)

 /*
 * Each EState must have its own es_epqScanDone state, but if we have
-* nested EPQ checks they should share es_epqTuple arrays. This allows
+* nested EPQ checks they should share es_epqTuple arrays. This allows
 * sub-rechecks to inherit the values being examined by an outer recheck.
 */
 estate->es_epqScanDone = (bool *) palloc0(rtsize * sizeof(bool));
@@ -2485,7 +2486,7 @@ EvalPlanQualStart(EPQState *epqstate, EState *parentestate, Plan *planTree)
 *
 * This is a cut-down version of ExecutorEnd(); basically we want to do most
 * of the normal cleanup, but *not* close result relations (which we are
-* just sharing from the outer query). We do, however, have to close any
+* just sharing from the outer query). We do, however, have to close any
 * trigger target relations that got opened, since those are not shared.
 * (There probably shouldn't be any of the latter, but just in case...)
 */
@@ -52,7 +52,7 @@
 * * ExecInitNode() notices that it is looking at a nest loop and
 * as the code below demonstrates, it calls ExecInitNestLoop().
 * Eventually this calls ExecInitNode() on the right and left subplans
-* and so forth until the entire plan is initialized. The result
+* and so forth until the entire plan is initialized. The result
 * of ExecInitNode() is a plan state tree built with the same structure
 * as the underlying plan tree.
 *
@@ -575,7 +575,7 @@ MultiExecProcNode(PlanState *node)
 * at 'node'.
 *
 * After this operation, the query plan will not be able to be
-* processed any further. This should be called only after
+* processed any further. This should be called only after
 * the query plan has been fully executed.
 * ----------------------------------------------------------------
 */
@@ -26,7 +26,7 @@
 * ExecProject() is used to make tuple projections. Rather then
 * trying to speed it up, the execution plan should be pre-processed
 * to facilitate attribute sharing between nodes wherever possible,
-* instead of doing needless copying. -cim 5/31/91
+* instead of doing needless copying. -cim 5/31/91
 *
 * During expression evaluation, we check_stack_depth only in
 * ExecMakeFunctionResult (and substitute routines) rather than at every
@@ -201,7 +201,7 @@ static Datum ExecEvalCurrentOfExpr(ExprState *exprstate, ExprContext *econtext,
 *
 * Note: for notational simplicity we declare these functions as taking the
 * specific type of ExprState that they work on. This requires casting when
-* assigning the function pointer in ExecInitExpr. Be careful that the
+* assigning the function pointer in ExecInitExpr. Be careful that the
 * function signature is declared correctly, because the cast suppresses
 * automatic checking!
 *
@@ -236,7 +236,7 @@ static Datum ExecEvalCurrentOfExpr(ExprState *exprstate, ExprContext *econtext,
 * The caller should already have switched into the temporary memory
 * context econtext->ecxt_per_tuple_memory. The convenience entry point
 * ExecEvalExprSwitchContext() is provided for callers who don't prefer to
-* do the switch in an outer loop. We do not do the switch in these routines
+* do the switch in an outer loop. We do not do the switch in these routines
 * because it'd be a waste of cycles during nested expression evaluation.
 * ----------------------------------------------------------------
 */
@@ -366,7 +366,7 @@ ExecEvalArrayRef(ArrayRefExprState *astate,
 * We might have a nested-assignment situation, in which the
 * refassgnexpr is itself a FieldStore or ArrayRef that needs to
 * obtain and modify the previous value of the array element or slice
-* being replaced. If so, we have to extract that value from the
+* being replaced. If so, we have to extract that value from the
 * array and pass it down via the econtext's caseValue. It's safe to
 * reuse the CASE mechanism because there cannot be a CASE between
 * here and where the value would be needed, and an array assignment
@@ -439,7 +439,7 @@ ExecEvalArrayRef(ArrayRefExprState *astate,
 /*
 * For assignment to varlena arrays, we handle a NULL original array
 * by substituting an empty (zero-dimensional) array; insertion of the
-* new element will result in a singleton array value. It does not
+* new element will result in a singleton array value. It does not
 * matter whether the new element is NULL.
 */
 if (*isNull)
@@ -829,11 +829,11 @@ ExecEvalWholeRowVar(WholeRowVarExprState *wrvstate, ExprContext *econtext,
 * We really only care about numbers of attributes and data types.
 * Also, we can ignore type mismatch on columns that are dropped in
 * the destination type, so long as (1) the physical storage matches
-* or (2) the actual column value is NULL. Case (1) is helpful in
+* or (2) the actual column value is NULL. Case (1) is helpful in
 * some cases involving out-of-date cached plans, while case (2) is
 * expected behavior in situations such as an INSERT into a table with
 * dropped columns (the planner typically generates an INT4 NULL
-* regardless of the dropped column type). If we find a dropped
+* regardless of the dropped column type). If we find a dropped
 * column and cannot verify that case (1) holds, we have to use
 * ExecEvalWholeRowSlow to check (2) for each row.
 */
@@ -1491,7 +1491,7 @@ ExecEvalFuncArgs(FunctionCallInfo fcinfo,
 * ExecPrepareTuplestoreResult
 *
 * Subroutine for ExecMakeFunctionResult: prepare to extract rows from a
-* tuplestore function result. We must set up a funcResultSlot (unless
+* tuplestore function result. We must set up a funcResultSlot (unless
 * already done in a previous call cycle) and verify that the function
 * returned the expected tuple descriptor.
 */
@@ -1536,7 +1536,7 @@ ExecPrepareTuplestoreResult(FuncExprState *fcache,
 }

 /*
-* If function provided a tupdesc, cross-check it. We only really need to
+* If function provided a tupdesc, cross-check it. We only really need to
 * do this for functions returning RECORD, but might as well do it always.
 */
 if (resultDesc)
@@ -1719,7 +1719,7 @@ restart:
 if (fcache->func.fn_retset || hasSetArg)
 {
 /*
-* We need to return a set result. Complain if caller not ready to
+* We need to return a set result. Complain if caller not ready to
 * accept one.
 */
 if (isDone == NULL)
@@ -2046,7 +2046,7 @@ ExecMakeTableFunctionResult(ExprState *funcexpr,
 /*
 * Normally the passed expression tree will be a FuncExprState, since the
 * grammar only allows a function call at the top level of a table
-* function reference. However, if the function doesn't return set then
+* function reference. However, if the function doesn't return set then
 * the planner might have replaced the function call via constant-folding
 * or inlining. So if we see any other kind of expression node, execute
 * it via the general ExecEvalExpr() code; the only difference is that we
@@ -2085,7 +2085,7 @@ ExecMakeTableFunctionResult(ExprState *funcexpr,
 *
 * Note: ideally, we'd do this in the per-tuple context, but then the
 * argument values would disappear when we reset the context in the
-* inner loop. So do it in caller context. Perhaps we should make a
+* inner loop. So do it in caller context. Perhaps we should make a
 * separate context just to hold the evaluated arguments?
 */
 argDone = ExecEvalFuncArgs(&fcinfo, fcache->args, econtext);
@@ -2171,7 +2171,7 @@ ExecMakeTableFunctionResult(ExprState *funcexpr,
 * Can't do anything very useful with NULL rowtype values. For a
 * function returning set, we consider this a protocol violation
 * (but another alternative would be to just ignore the result and
-* "continue" to get another row). For a function not returning
+* "continue" to get another row). For a function not returning
 * set, we fall out of the loop; we'll cons up an all-nulls result
 * row below.
 */
@@ -2305,7 +2305,7 @@ no_function_result:
 }

 /*
-* If function provided a tupdesc, cross-check it. We only really need to
+* If function provided a tupdesc, cross-check it. We only really need to
 * do this for functions returning RECORD, but might as well do it always.
 */
 if (rsinfo.setDesc)
@@ -2483,7 +2483,7 @@ ExecEvalDistinct(FuncExprState *fcache,
 *
 * Evaluate "scalar op ANY/ALL (array)". The operator always yields boolean,
 * and we combine the results across all array elements using OR and AND
-* (for ANY and ALL respectively). Of course we short-circuit as soon as
+* (for ANY and ALL respectively). Of course we short-circuit as soon as
 * the result is known.
 */
 static Datum
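The OR/AND-with-short-circuit combination described above is easy to demonstrate in isolation. A sketch with hypothetical names; note it ignores SQL NULL handling, which the real routine must also deal with:

```c
/* Sketch of "scalar op ANY/ALL (array)" evaluation as described above:
 * combine per-element comparison results with OR (ANY) or AND (ALL),
 * short-circuiting once the outcome is known. Hypothetical names. */
#include <stdio.h>
#include <stdbool.h>

static bool scalar_op_any_all(int scalar, const int *arr, int n,
                              bool (*op)(int, int), bool use_or)
{
    for (int i = 0; i < n; i++)
    {
        bool elem = op(scalar, arr[i]);
        if (use_or && elem)
            return true;        /* ANY: one true result decides */
        if (!use_or && !elem)
            return false;       /* ALL: one false result decides */
    }
    return !use_or;             /* nothing decided: OR=false, AND=true */
}

static bool gt(int a, int b) { return a > b; }

int main(void)
{
    int a[] = {1, 5, 9};
    printf("3 > ANY: %d\n", scalar_op_any_all(3, a, 3, gt, true));  /* 1 */
    printf("3 > ALL: %d\n", scalar_op_any_all(3, a, 3, gt, false)); /* 0 */
    return 0;
}
```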
@@ -2670,7 +2670,7 @@ ExecEvalScalarArrayOp(ScalarArrayOpExprState *sstate,
 * qualification to conjunctive normal form. If we ever get
 * an AND to evaluate, we can be sure that it's not a top-level
 * clause in the qualification, but appears lower (as a function
-* argument, for example), or in the target list. Not that you
+* argument, for example), or in the target list. Not that you
 * need to know this, mind you...
 * ----------------------------------------------------------------
 */
@@ -2801,7 +2801,7 @@ ExecEvalAnd(BoolExprState *andExpr, ExprContext *econtext,
 /* ----------------------------------------------------------------
 * ExecEvalConvertRowtype
 *
-* Evaluate a rowtype coercion operation. This may require
+* Evaluate a rowtype coercion operation. This may require
 * rearranging field positions.
 * ----------------------------------------------------------------
 */
@@ -2930,7 +2930,7 @@ ExecEvalCase(CaseExprState *caseExpr, ExprContext *econtext,

 /*
 * if we have a true test, then we return the result, since the case
-* statement is satisfied. A NULL result from the test is not
+* statement is satisfied. A NULL result from the test is not
 * considered true.
 */
 if (DatumGetBool(clause_value) && !*isNull)
@@ -3144,7 +3144,7 @@ ExecEvalArray(ArrayExprState *astate, ExprContext *econtext,
 * If all items were null or empty arrays, return an empty array;
 * otherwise, if some were and some weren't, raise error. (Note: we
 * must special-case this somehow to avoid trying to generate a 1-D
-* array formed from empty arrays. It's not ideal...)
+* array formed from empty arrays. It's not ideal...)
 */
 if (haveempty)
 {
@@ -4315,7 +4315,7 @@ ExecEvalExprSwitchContext(ExprState *expression,
 * ExecInitExpr: prepare an expression tree for execution
 *
 * This function builds and returns an ExprState tree paralleling the given
-* Expr node tree. The ExprState tree can then be handed to ExecEvalExpr
+* Expr node tree. The ExprState tree can then be handed to ExecEvalExpr
 * for execution. Because the Expr tree itself is read-only as far as
 * ExecInitExpr and ExecEvalExpr are concerned, several different executions
 * of the same plan tree can occur concurrently.
@@ -4326,9 +4326,9 @@ ExecEvalExprSwitchContext(ExprState *expression,
 *
 * Any Aggref, WindowFunc, or SubPlan nodes found in the tree are added to the
 * lists of such nodes held by the parent PlanState. Otherwise, we do very
-* little initialization here other than building the state-node tree. Any
+* little initialization here other than building the state-node tree. Any
 * nontrivial work associated with initializing runtime info for a node should
-* happen during the first actual evaluation of that node. (This policy lets
+* happen during the first actual evaluation of that node. (This policy lets
 * us avoid work if the node is never actually evaluated.)
 *
 * Note: there is no ExecEndExpr function; we assume that any resource
@@ -5133,7 +5133,7 @@ ExecQual(List *qual, ExprContext *econtext, bool resultForNull)
 oldContext = MemoryContextSwitchTo(econtext->ecxt_per_tuple_memory);

 /*
-* Evaluate the qual conditions one at a time. If we find a FALSE result,
+* Evaluate the qual conditions one at a time. If we find a FALSE result,
 * we can stop evaluating and return FALSE --- the AND result must be
 * FALSE. Also, if we find a NULL result when resultForNull is FALSE, we
 * can stop and return FALSE --- the AND result must be FALSE or NULL in
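The implicit-AND qual semantics in this hunk, stopping on the first FALSE and treating NULL according to resultForNull, can be modeled with three-valued inputs. A sketch, assuming a precomputed result per condition rather than real expression evaluation:

```c
/* Sketch of the implicit-AND qual loop described above: stop at the
 * first FALSE, and treat NULL per resultForNull. Hypothetical names;
 * the real ExecQual evaluates expression nodes, not a result array. */
#include <stdio.h>
#include <stdbool.h>

typedef enum { Q_FALSE, Q_TRUE, Q_NULL } QualResult;

static bool exec_qual(const QualResult *quals, int n, bool resultForNull)
{
    for (int i = 0; i < n; i++)
    {
        if (quals[i] == Q_FALSE)
            return false;               /* AND result must be FALSE */
        if (quals[i] == Q_NULL && !resultForNull)
            return false;               /* AND result is FALSE or NULL */
    }
    return true;                        /* every condition passed */
}

int main(void)
{
    QualResult q[] = {Q_TRUE, Q_NULL, Q_TRUE};
    printf("%d\n", exec_qual(q, 3, false));  /* 0: NULL treated as failure */
    printf("%d\n", exec_qual(q, 3, true));   /* 1: NULL allowed to pass */
    return 0;
}
```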
@@ -5292,7 +5292,7 @@ ExecTargetList(List *targetlist,
 else
 {
 /*
-* We have some done and some undone sets. Restart the done ones
+* We have some done and some undone sets. Restart the done ones
 * so that we can deliver a tuple (if possible).
 */
 foreach(tl, targetlist)
@@ -30,7 +30,7 @@ static bool tlist_matches_tupdesc(PlanState *ps, List *tlist, Index varno, Tuple
 * ExecScanFetch -- fetch next potential tuple
 *
 * This routine is concerned with substituting a test tuple if we are
-* inside an EvalPlanQual recheck. If we aren't, just execute
+* inside an EvalPlanQual recheck. If we aren't, just execute
 * the access method's next-tuple routine.
 */
 static inline TupleTableSlot *
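The test-tuple substitution described above amounts to a one-branch dispatch. A standalone sketch with hypothetical names and string "tuples":

```c
/* Sketch of the test-tuple substitution described above: during an
 * EvalPlanQual-style recheck, return the stashed test tuple instead of
 * calling the access method. Hypothetical names and types. */
#include <stdio.h>

typedef struct Scanner
{
    const char *epq_tuple;              /* non-NULL while rechecking */
    const char *(*access_method_next)(void);
} Scanner;

static const char *real_next(void) { return "tuple-from-heap"; }

static const char *scan_fetch(Scanner *s)
{
    if (s->epq_tuple != NULL)
        return s->epq_tuple;            /* recheck: use the test tuple */
    return s->access_method_next();     /* normal path */
}

int main(void)
{
    Scanner s = {NULL, real_next};
    printf("%s\n", scan_fetch(&s));     /* tuple-from-heap */
    s.epq_tuple = "test-tuple";
    printf("%s\n", scan_fetch(&s));     /* test-tuple */
    return 0;
}
```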
@@ -155,7 +155,7 @@ ExecScan(ScanState *node,
 ResetExprContext(econtext);

 /*
-* get a tuple from the access method. Loop until we obtain a tuple that
+* get a tuple from the access method. Loop until we obtain a tuple that
 * passes the qualification.
 */
 for (;;)
@@ -4,7 +4,7 @@
 * Routines dealing with TupleTableSlots. These are used for resource
 * management associated with tuples (eg, releasing buffer pins for
 * tuples in disk buffers, or freeing the memory occupied by transient
-* tuples). Slots also provide access abstraction that lets us implement
+* tuples). Slots also provide access abstraction that lets us implement
 * "virtual" tuples to reduce data-copying overhead.
 *
 * Routines dealing with the type information for tuples. Currently,
@@ -261,7 +261,7 @@ ExecSetSlotDescriptor(TupleTableSlot *slot, /* slot to change */
 ExecClearTuple(slot);

 /*
-* Release any old descriptor. Also release old Datum/isnull arrays if
+* Release any old descriptor. Also release old Datum/isnull arrays if
 * present (we don't bother to check if they could be re-used).
 */
 if (slot->tts_tupleDescriptor)
@@ -311,7 +311,7 @@ ExecSetSlotDescriptor(TupleTableSlot *slot, /* slot to change */
 * Another case where it is 'false' is when the referenced tuple is held
 * in a tuple table slot belonging to a lower-level executor Proc node.
 * In this case the lower-level slot retains ownership and responsibility
-* for eventually releasing the tuple. When this method is used, we must
+* for eventually releasing the tuple. When this method is used, we must
 * be certain that the upper-level Proc node will lose interest in the tuple
 * sooner than the lower-level one does! If you're not certain, copy the
 * lower-level tuple with heap_copytuple and let the upper-level table
@@ -650,7 +650,7 @@ ExecFetchSlotTuple(TupleTableSlot *slot)
 * Fetch the slot's minimal physical tuple.
 *
 * If the slot contains a virtual tuple, we convert it to minimal
-* physical form. The slot retains ownership of the minimal tuple.
+* physical form. The slot retains ownership of the minimal tuple.
 * If it contains a regular tuple we convert to minimal form and store
 * that in addition to the regular tuple (not instead of, because
 * callers may hold pointers to Datums within the regular tuple).
@@ -829,7 +829,7 @@ ExecCopySlot(TupleTableSlot *dstslot, TupleTableSlot *srcslot)
 * ExecInit{Result,Scan,Extra}TupleSlot
 *
 * These are convenience routines to initialize the specified slot
-* in nodes inheriting the appropriate state. ExecInitExtraTupleSlot
+* in nodes inheriting the appropriate state. ExecInitExtraTupleSlot
 * is used for initializing special-purpose slots.
 * --------------------------------
 */
@@ -1147,7 +1147,7 @@ BuildTupleFromCStrings(AttInMetadata *attinmeta, char **values)
 * code would have no way to obtain a tupledesc for the tuple.
 *
 * Note that if we do build a new tuple, it's palloc'd in the current
-* memory context. Beware of code that changes context between the initial
+* memory context. Beware of code that changes context between the initial
 * heap_form_tuple/etc call and calling HeapTuple(Header)GetDatum.
 *
 * For performance-critical callers, it could be worthwhile to take extra
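The "virtual tuple" idea from the first hunk, keeping only column values and materializing a physical row on demand, can be sketched as follows (hypothetical names, two fixed columns):

```c
/* Sketch of the "virtual tuple" idea described above: a slot can hold
 * column values without a materialized row, and only builds the physical
 * copy when someone actually needs it. Hypothetical names. */
#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>

typedef struct Slot
{
    int   values[2];        /* virtual form: just the column values */
    bool  has_physical;
    int  *physical;         /* materialized copy, built on demand */
} Slot;

static int *slot_get_tuple(Slot *slot)
{
    if (!slot->has_physical)
    {
        /* materialize: copy values into a standalone allocation */
        slot->physical = malloc(sizeof(slot->values));
        for (int i = 0; i < 2; i++)
            slot->physical[i] = slot->values[i];
        slot->has_physical = true;
    }
    return slot->physical;
}

int main(void)
{
    Slot s = {{7, 8}, false, NULL};
    int *tup = slot_get_tuple(&s);      /* first call materializes */
    printf("%d %d\n", tup[0], tup[1]);
    free(s.physical);
    return 0;
}
```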
@@ -105,7 +105,7 @@ CreateExecutorState(void)
 * Initialize all fields of the Executor State structure
 */
 estate->es_direction = ForwardScanDirection;
-estate->es_snapshot = InvalidSnapshot; /* caller must initialize this */
+estate->es_snapshot = InvalidSnapshot; /* caller must initialize this */
 estate->es_crosscheck_snapshot = InvalidSnapshot; /* no crosscheck */
 estate->es_range_table = NIL;
 estate->es_plannedstmt = NULL;
@@ -342,7 +342,7 @@ CreateStandaloneExprContext(void)
 * any previously computed pass-by-reference expression result will go away!
 *
 * If isCommit is false, we are being called in error cleanup, and should
-* not call callbacks but only release memory. (It might be better to call
+* not call callbacks but only release memory. (It might be better to call
 * the callbacks and pass the isCommit flag to them, but that would require
 * more invasive code changes than currently seems justified.)
 *
@@ -371,7 +371,7 @@ FreeExprContext(ExprContext *econtext, bool isCommit)
 * ReScanExprContext
 *
 * Reset an expression context in preparation for a rescan of its
-* plan node. This requires calling any registered shutdown callbacks,
+* plan node. This requires calling any registered shutdown callbacks,
 * since any partially complete set-returning-functions must be canceled.
 *
 * Note we make no assumption about the caller's memory context.
@@ -412,7 +412,7 @@ MakePerTupleExprContext(EState *estate)
 /* ----------------
 * ExecAssignExprContext
 *
-* This initializes the ps_ExprContext field. It is only necessary
+* This initializes the ps_ExprContext field. It is only necessary
 * to do this for nodes which use ExecQual or ExecProject
 * because those routines require an econtext. Other nodes that
 * don't have to evaluate expressions don't need to do this.
@@ -458,7 +458,7 @@ ExecAssignResultTypeFromTL(PlanState *planstate)

 /*
 * ExecTypeFromTL needs the parse-time representation of the tlist, not a
-* list of ExprStates. This is good because some plan nodes don't bother
+* list of ExprStates. This is good because some plan nodes don't bother
 * to set up planstate->targetlist ...
 */
 tupDesc = ExecTypeFromTL(planstate->plan->targetlist, hasoid);
@@ -486,7 +486,7 @@ ExecGetResultType(PlanState *planstate)
 * the given tlist should be a list of ExprState nodes, not Expr nodes.
 *
 * inputDesc can be NULL, but if it is not, we check to see whether simple
-* Vars in the tlist match the descriptor. It is important to provide
+* Vars in the tlist match the descriptor. It is important to provide
 * inputDesc for relation-scan plan nodes, as a cross check that the relation
 * hasn't been changed since the plan was made. At higher levels of a plan,
 * there is no need to recheck.
@@ -692,7 +692,7 @@ ExecAssignProjectionInfo(PlanState *planstate,
 *
 * However ... there is no particular need to do it during ExecEndNode,
 * because FreeExecutorState will free any remaining ExprContexts within
-* the EState. Letting FreeExecutorState do it allows the ExprContexts to
+* the EState. Letting FreeExecutorState do it allows the ExprContexts to
 * be freed in reverse order of creation, rather than order of creation as
 * will happen if we delete them here, which saves O(N^2) work in the list
 * cleanup inside FreeExprContext.
@@ -712,7 +712,7 @@ ExecFreeExprContext(PlanState *planstate)
 * the following scan type support functions are for
 * those nodes which are stubborn and return tuples in
 * their Scan tuple slot instead of their Result tuple
-* slot.. luck fur us, these nodes do not do projections
+* slot.. luck fur us, these nodes do not do projections
 * so we don't have to worry about getting the ProjectionInfo
 * right for them... -cim 6/3/91
 * ----------------------------------------------------------------
@@ -1111,7 +1111,7 @@ ExecInsertIndexTuples(TupleTableSlot *slot,
 /*
 * If the index has an associated exclusion constraint, check that.
 * This is simpler than the process for uniqueness checks since we
-* always insert first and then check. If the constraint is deferred,
+* always insert first and then check. If the constraint is deferred,
 * we check now anyway, but don't throw error on violation; instead
 * we'll queue a recheck event.
 *
@@ -1295,7 +1295,7 @@ retry:

 /*
 * If an in-progress transaction is affecting the visibility of this
-* tuple, we need to wait for it to complete and then recheck. For
+* tuple, we need to wait for it to complete and then recheck. For
 * simplicity we do rechecking by just restarting the whole scan ---
 * this case probably doesn't happen often enough to be worth trying
 * harder, and anyway we don't want to hold any index internal locks
@@ -1357,7 +1357,7 @@ retry:

 /*
 * Check existing tuple's index values to see if it really matches the
-* exclusion condition against the new_values. Returns true if conflict.
+* exclusion condition against the new_values. Returns true if conflict.
 */
 static bool
 index_recheck_constraint(Relation index, Oid *constr_procs,
@@ -47,7 +47,7 @@ typedef struct
 } DR_sqlfunction;

 /*
-* We have an execution_state record for each query in a function. Each
+* We have an execution_state record for each query in a function. Each
 * record contains a plantree for its query. If the query is currently in
 * F_EXEC_RUN state then there's a QueryDesc too.
 *
@@ -466,7 +466,7 @@ sql_fn_resolve_param_name(SQLFunctionParseInfoPtr pinfo,
 * Set up the per-query execution_state records for a SQL function.
 *
 * The input is a List of Lists of parsed and rewritten, but not planned,
-* querytrees. The sublist structure denotes the original query boundaries.
+* querytrees. The sublist structure denotes the original query boundaries.
 */
 static List *
 init_execution_state(List *queryTree_list,
@@ -590,7 +590,7 @@ init_sql_fcache(FmgrInfo *finfo, Oid collation, bool lazyEvalOK)
 bool isNull;

 /*
-* Create memory context that holds all the SQLFunctionCache data. It
+* Create memory context that holds all the SQLFunctionCache data. It
 * must be a child of whatever context holds the FmgrInfo.
 */
 fcontext = AllocSetContextCreate(finfo->fn_mcxt,
@@ -602,7 +602,7 @@ init_sql_fcache(FmgrInfo *finfo, Oid collation, bool lazyEvalOK)
 oldcontext = MemoryContextSwitchTo(fcontext);

 /*
-* Create the struct proper, link it to fcontext and fn_extra. Once this
+* Create the struct proper, link it to fcontext and fn_extra. Once this
 * is done, we'll be able to recover the memory after failure, even if the
 * FmgrInfo is long-lived.
 */
@@ -672,7 +672,7 @@ init_sql_fcache(FmgrInfo *finfo, Oid collation, bool lazyEvalOK)
 fcache->src = TextDatumGetCString(tmp);

 /*
-* Parse and rewrite the queries in the function text. Use sublists to
+* Parse and rewrite the queries in the function text. Use sublists to
 * keep track of the original query boundaries. But we also build a
 * "flat" list of the rewritten queries to pass to check_sql_fn_retval.
 * This is because the last canSetTag query determines the result type
@@ -712,7 +712,7 @@ init_sql_fcache(FmgrInfo *finfo, Oid collation, bool lazyEvalOK)
 * any polymorphic arguments.
 *
 * Note: we set fcache->returnsTuple according to whether we are returning
-* the whole tuple result or just a single column. In the latter case we
+* the whole tuple result or just a single column. In the latter case we
 * clear returnsTuple because we need not act different from the scalar
 * result case, even if it's a rowtype column. (However, we have to force
 * lazy eval mode in that case; otherwise we'd need extra code to expand
@@ -944,7 +944,7 @@ postquel_get_single_result(TupleTableSlot *slot,
 /*
 * Set up to return the function value. For pass-by-reference datatypes,
 * be sure to allocate the result in resultcontext, not the current memory
-* context (which has query lifespan). We can't leave the data in the
+* context (which has query lifespan). We can't leave the data in the
 * TupleTableSlot because we intend to clear the slot before returning.
 */
 oldcontext = MemoryContextSwitchTo(resultcontext);
@@ -1052,7 +1052,7 @@ fmgr_sql(PG_FUNCTION_ARGS)
 /*
 * Switch to context in which the fcache lives. This ensures that our
 * tuplestore etc will have sufficient lifetime. The sub-executor is
-* responsible for deleting per-tuple information. (XXX in the case of a
+* responsible for deleting per-tuple information. (XXX in the case of a
 * long-lived FmgrInfo, this policy represents more memory leakage, but
 * it's not entirely clear where to keep stuff instead.)
 */
@@ -1106,7 +1106,7 @@ fmgr_sql(PG_FUNCTION_ARGS)
 * suspend execution before completion is if we are returning a row from a
 * lazily-evaluated SELECT. So, when first entering this loop, we'll
 * either start a new query (and push a fresh snapshot) or re-establish
-* the active snapshot from the existing query descriptor. If we need to
+* the active snapshot from the existing query descriptor. If we need to
 * start a new query in a subsequent execution of the loop, either we need
 * a fresh snapshot (and pushed_snapshot is false) or the existing
 * snapshot is on the active stack and we can just bump its command ID.
@@ -1162,7 +1162,7 @@ fmgr_sql(PG_FUNCTION_ARGS)
 * Break from loop if we didn't shut down (implying we got a
 * lazily-evaluated row). Otherwise we'll press on till the whole
 * function is done, relying on the tuplestore to keep hold of the
-* data to eventually be returned. This is necessary since an
+* data to eventually be returned. This is necessary since an
 * INSERT/UPDATE/DELETE RETURNING that sets the result might be
 * followed by additional rule-inserted commands, and we want to
 * finish doing all those commands before we return anything.
@@ -1184,7 +1184,7 @@ fmgr_sql(PG_FUNCTION_ARGS)

 /*
 * Flush the current snapshot so that we will take a new one for
-* the new query list. This ensures that new snaps are taken at
+* the new query list. This ensures that new snaps are taken at
 * original-query boundaries, matching the behavior of interactive
 * execution.
 */
@@ -1242,7 +1242,7 @@ fmgr_sql(PG_FUNCTION_ARGS)
 else if (fcache->lazyEval)
 {
 /*
-* We are done with a lazy evaluation. Clean up.
+* We are done with a lazy evaluation. Clean up.
 */
 tuplestore_clear(fcache->tstore);

@@ -1266,8 +1266,8 @@ fmgr_sql(PG_FUNCTION_ARGS)
 else
 {
 /*
-* We are done with a non-lazy evaluation. Return whatever is in
-* the tuplestore. (It is now caller's responsibility to free the
+* We are done with a non-lazy evaluation. Return whatever is in
+* the tuplestore. (It is now caller's responsibility to free the
 * tuplestore when done.)
 */
 rsi->returnMode = SFRM_Materialize;
@@ -1379,7 +1379,7 @@ sql_exec_error_callback(void *arg)

 /*
 * Try to determine where in the function we failed. If there is a query
-* with non-null QueryDesc, finger it. (We check this rather than looking
+* with non-null QueryDesc, finger it. (We check this rather than looking
 * for F_EXEC_RUN state, so that errors during ExecutorStart or
 * ExecutorEnd are blamed on the appropriate query; see postquel_start and
 * postquel_end.)
@@ -1671,7 +1671,7 @@ check_sql_fn_retval(Oid func_id, Oid rettype, List *queryTreeList,
 * the function that's calling it.
 *
 * XXX Note that if rettype is RECORD, the IsBinaryCoercible check
-* will succeed for any composite restype. For the moment we rely on
+* will succeed for any composite restype. For the moment we rely on
 * runtime type checking to catch any discrepancy, but it'd be nice to
 * do better at parse time.
 */
@@ -1717,7 +1717,7 @@ check_sql_fn_retval(Oid func_id, Oid rettype, List *queryTreeList,
 /*
 * Verify that the targetlist matches the return tuple type. We scan
 * the non-deleted attributes to ensure that they match the datatypes
-* of the non-resjunk columns. For deleted attributes, insert NULL
+* of the non-resjunk columns. For deleted attributes, insert NULL
 * result columns if the caller asked for that.
 */
 tupnatts = tupdesc->natts;
@@ -25,7 +25,7 @@
 * The agg's first input type and transtype must be the same in this case!
 *
 * If transfunc is marked "strict" then NULL input_values are skipped,
-* keeping the previous transvalue. If transfunc is not strict then it
+* keeping the previous transvalue. If transfunc is not strict then it
 * is called for every input tuple and must deal with NULL initcond
 * or NULL input_values for itself.
 *
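The strict-transition-function rule above (skip NULL inputs, keep the previous transvalue) looks like this in miniature; a running int sum stands in for the transition state, and the names are hypothetical:

```c
/* Sketch of the strict-transition-function rule described above: with a
 * strict transfn, NULL inputs are skipped and the previous transvalue is
 * kept. Hypothetical names; an int sum stands in for the transtype. */
#include <stdio.h>
#include <stdbool.h>

typedef struct Input { int value; bool isnull; } Input;

static int advance_strict_sum(int transvalue, const Input *in, int n)
{
    for (int i = 0; i < n; i++)
    {
        if (in[i].isnull)
            continue;                   /* strict: skip, keep transvalue */
        transvalue += in[i].value;      /* the "transition function" */
    }
    return transvalue;
}

int main(void)
{
    Input rows[] = {{5, false}, {0, true /* NULL */}, {7, false}};
    printf("%d\n", advance_strict_sum(0, rows, 3));   /* 12 */
    return 0;
}
```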
@@ -66,7 +66,7 @@
 * it is completely forbidden for functions to modify pass-by-ref inputs,
 * but in the aggregate case we know the left input is either the initial
 * transition value or a previous function result, and in either case its
-* value need not be preserved. See int8inc() for an example. Notice that
+* value need not be preserved. See int8inc() for an example. Notice that
 * advance_transition_function() is coded to avoid a data copy step when
 * the previous transition value pointer is returned. Also, some
 * transition functions want to store working state in addition to the
@@ -132,14 +132,14 @@ typedef struct AggStatePerAggData
 Aggref *aggref;

 /*
-* Nominal number of arguments for aggregate function. For plain aggs,
-* this excludes any ORDER BY expressions. For ordered-set aggs, this
+* Nominal number of arguments for aggregate function. For plain aggs,
+* this excludes any ORDER BY expressions. For ordered-set aggs, this
 * counts both the direct and aggregated (ORDER BY) arguments.
 */
 int numArguments;

 /*
-* Number of aggregated input columns. This includes ORDER BY expressions
+* Number of aggregated input columns. This includes ORDER BY expressions
 * in both the plain-agg and ordered-set cases. Ordered-set direct args
 * are not counted, though.
 */
@@ -153,7 +153,7 @@ typedef struct AggStatePerAggData
 int numTransInputs;

 /*
-* Number of arguments to pass to the finalfn. This is always at least 1
+* Number of arguments to pass to the finalfn. This is always at least 1
 * (the transition state value) plus any ordered-set direct args. If the
 * finalfn wants extra args then we pass nulls corresponding to the
 * aggregated input columns.
@@ -216,7 +216,7 @@ typedef struct AggStatePerAggData
 transtypeByVal;

 /*
-* Stuff for evaluation of inputs. We used to just use ExecEvalExpr, but
+* Stuff for evaluation of inputs. We used to just use ExecEvalExpr, but
 * with the addition of ORDER BY we now need at least a slot for passing
 * data to the sort object, which requires a tupledesc, so we might as
 * well go whole hog and use ExecProject too.
@@ -236,7 +236,7 @@ typedef struct AggStatePerAggData
 * input tuple group and updated for each input tuple.
 *
 * For a simple (non DISTINCT/ORDER BY) aggregate, we just feed the input
-* values straight to the transition function. If it's DISTINCT or
+* values straight to the transition function. If it's DISTINCT or
 * requires ORDER BY, we pass the input values into a Tuplesort object;
 * then at completion of the input tuple group, we scan the sorted values,
 * eliminate duplicates if needed, and run the transition function on the
@@ -279,7 +279,7 @@ typedef struct AggStatePerGroupData

 /*
 * Note: noTransValue initially has the same value as transValueIsNull,
-* and if true both are cleared to false at the same time. They are not
+* and if true both are cleared to false at the same time. They are not
 * the same though: if transfn later returns a NULL, we want to keep that
 * NULL and not auto-replace it with a later input value. Only the first
 * non-NULL input will be auto-substituted.
@@ -289,7 +289,7 @@ typedef struct AggStatePerGroupData
 /*
 * To implement hashed aggregation, we need a hashtable that stores a
 * representative tuple and an array of AggStatePerGroup structs for each
-* distinct set of GROUP BY column values. We compute the hash key from
+* distinct set of GROUP BY column values. We compute the hash key from
 * the GROUP BY columns.
 */
 typedef struct AggHashEntryData *AggHashEntry;
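Computing the hash key from the GROUP BY columns, as this hunk describes, means folding each grouping column into one hash value. A generic FNV-1a-style sketch, not PostgreSQL's actual hash functions:

```c
/* Sketch of deriving a hash-table key from the GROUP BY columns, as the
 * comment above describes. The fold-and-multiply combiner is generic,
 * not PostgreSQL's hash support functions. Hypothetical names. */
#include <stdio.h>
#include <stdint.h>

static uint32_t hash_group_key(const int *groupcols, int ncols)
{
    uint32_t hash = 2166136261u;            /* FNV offset basis */
    for (int i = 0; i < ncols; i++)
    {
        hash ^= (uint32_t) groupcols[i];    /* fold in one GROUP BY column */
        hash *= 16777619u;                  /* FNV prime */
    }
    return hash;
}

int main(void)
{
    int key1[] = {2024, 7};                 /* e.g. (year, month) */
    int key2[] = {2024, 8};
    printf("%u\n%u\n", (unsigned) hash_group_key(key1, 2),
           (unsigned) hash_group_key(key2, 2));
    return 0;
}
```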
@@ -416,7 +416,7 @@ initialize_aggregates(AggState *aggstate,
 *
 * The new values (and null flags) have been preloaded into argument positions
 * 1 and up in peraggstate->transfn_fcinfo, so that we needn't copy them again
-* to pass to the transition function. We also expect that the static fields
+* to pass to the transition function. We also expect that the static fields
 * of the fcinfo are already initialized; that was done by ExecInitAgg().
 *
 * It doesn't matter which memory context this is called in.
@@ -495,7 +495,7 @@ advance_transition_function(AggState *aggstate,

 /*
 * If pass-by-ref datatype, must copy the new value into aggcontext and
-* pfree the prior transValue. But if transfn returned a pointer to its
+* pfree the prior transValue. But if transfn returned a pointer to its
 * first input, we don't need to do anything.
 */
 if (!peraggstate->transtypeByVal &&
@@ -519,7 +519,7 @@ advance_transition_function(AggState *aggstate,
 }

 /*
-* Advance all the aggregates for one input tuple. The input tuple
+* Advance all the aggregates for one input tuple. The input tuple
 * has been stored in tmpcontext->ecxt_outertuple, so that it is accessible
 * to ExecEvalExpr. pergroup is the array of per-group structs to use
 * (this might be in a hashtable entry).
@@ -609,7 +609,7 @@ advance_aggregates(AggState *aggstate, AggStatePerGroup pergroup)
 /*
 * Run the transition function for a DISTINCT or ORDER BY aggregate
 * with only one input. This is called after we have completed
-* entering all the input values into the sort object. We complete the
+* entering all the input values into the sort object. We complete the
 * sort, read out the values in sorted order, and run the transition
 * function on each value (applying DISTINCT if appropriate).
 *
@@ -705,7 +705,7 @@ process_ordered_aggregate_single(AggState *aggstate,
 /*
 * Run the transition function for a DISTINCT or ORDER BY aggregate
 * with more than one input. This is called after we have completed
-* entering all the input values into the sort object. We complete the
+* entering all the input values into the sort object. We complete the
 * sort, read out the values in sorted order, and run the transition
 * function on each value (applying DISTINCT if appropriate).
 *
@@ -1070,9 +1070,9 @@ lookup_hash_entry(AggState *aggstate, TupleTableSlot *inputslot)
 * the appropriate attribute for each aggregate function use (Aggref
 * node) appearing in the targetlist or qual of the node. The number
 * of tuples to aggregate over depends on whether grouped or plain
-* aggregation is selected. In grouped aggregation, we produce a result
+* aggregation is selected. In grouped aggregation, we produce a result
 * row for each group; in plain aggregation there's a single result row
-* for the whole query. In either case, the value of each aggregate is
+* for the whole query. In either case, the value of each aggregate is
 * stored in the expression context to be used when ExecProject evaluates
 * the result tuple.
 */
@@ -1097,7 +1097,7 @@ ExecAgg(AggState *node)
 }

 /*
-* Exit if nothing left to do. (We must do the ps_TupFromTlist check
+* Exit if nothing left to do. (We must do the ps_TupFromTlist check
 * first, because in some cases agg_done gets set before we emit the final
 * aggregate tuple, and we have to finish running SRFs for it.)
 */
@@ -1181,11 +1181,11 @@ agg_retrieve_direct(AggState *aggstate)
 /*
 * Clear the per-output-tuple context for each group, as well as
 * aggcontext (which contains any pass-by-ref transvalues of the old
-* group). We also clear any child contexts of the aggcontext; some
+* group). We also clear any child contexts of the aggcontext; some
 * aggregate functions store working state in such contexts.
 *
 * We use ReScanExprContext not just ResetExprContext because we want
-* any registered shutdown callbacks to be called. That allows
+* any registered shutdown callbacks to be called. That allows
 * aggregate functions to ensure they've cleaned up any non-memory
 * resources.
 */
@@ -1518,8 +1518,8 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
 aggstate->hashtable = NULL;

 /*
-* Create expression contexts. We need two, one for per-input-tuple
-* processing and one for per-output-tuple processing. We cheat a little
+* Create expression contexts. We need two, one for per-input-tuple
+* processing and one for per-output-tuple processing. We cheat a little
 * by using ExecAssignExprContext() to build both.
 */
 ExecAssignExprContext(estate, &aggstate->ss.ps);
@@ -1552,7 +1552,7 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
 * initialize child expressions
 *
 * Note: ExecInitExpr finds Aggrefs for us, and also checks that no aggs
-* contain other agg calls in their arguments. This would make no sense
+* contain other agg calls in their arguments. This would make no sense
 * under SQL semantics anyway (and it's forbidden by the spec). Because
 * that is true, we don't need to worry about evaluating the aggs in any
 * particular order.
@@ -1599,7 +1599,7 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
 * This is not an error condition: we might be using the Agg node just
 * to do hash-based grouping. Even in the regular case,
 * constant-expression simplification could optimize away all of the
-* Aggrefs in the targetlist and qual. So keep going, but force local
+* Aggrefs in the targetlist and qual. So keep going, but force local
 * copy of numaggs positive so that palloc()s below don't choke.
 */
 numaggs = 1;
@@ -1760,7 +1760,7 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
 }

 /*
-* Get actual datatypes of the (nominal) aggregate inputs. These
+* Get actual datatypes of the (nominal) aggregate inputs. These
 * could be different from the agg's declared input types, when the
 * agg accepts ANY or a polymorphic type.
 */
@@ -1852,7 +1852,7 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
 * If the transfn is strict and the initval is NULL, make sure input
 * type and transtype are the same (or at least binary-compatible), so
 * that it's OK to use the first aggregated input value as the initial
-* transValue. This should have been checked at agg definition time,
+* transValue. This should have been checked at agg definition time,
 * but we must check again in case the transfn's strictness property
 * has been changed.
 */
@@ -1885,7 +1885,7 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
 /*
 * If we're doing either DISTINCT or ORDER BY for a plain agg, then we
 * have a list of SortGroupClause nodes; fish out the data in them and
-* stick them into arrays. We ignore ORDER BY for an ordered-set agg,
+* stick them into arrays. We ignore ORDER BY for an ordered-set agg,
 * however; the agg's transfn and finalfn are responsible for that.
 *
 * Note that by construction, if there is a DISTINCT clause then the
@@ -2144,8 +2144,8 @@ ExecReScanAgg(AggState *node)
 *
 * The transition and/or final functions of an aggregate may want to verify
 * that they are being called as aggregates, rather than as plain SQL
-* functions. They should use this function to do so. The return value
-* is nonzero if being called as an aggregate, or zero if not. (Specific
+* functions. They should use this function to do so. The return value
+* is nonzero if being called as an aggregate, or zero if not. (Specific
 * nonzero values are AGG_CONTEXT_AGGREGATE or AGG_CONTEXT_WINDOW, but more
 * values could conceivably appear in future.)
 *
@@ -33,7 +33,7 @@
 * /
 * Append -------+------+------+--- nil
 * / \ | | |
-* nil nil ... ... ...
+* nil nil ... ... ...
 * subplans
 *
 * Append nodes are currently used for unions, and to support
@@ -5,7 +5,7 @@
 *
 * NOTE: it is critical that this plan type only be used with MVCC-compliant
 * snapshots (ie, regular snapshots, not SnapshotAny or one of the other
-* special snapshots). The reason is that since index and heap scans are
+* special snapshots). The reason is that since index and heap scans are
 * decoupled, there can be no assurance that the index tuple prompting a
 * visit to a particular heap TID still exists when the visit is made.
 * Therefore the tuple might not exist anymore either (which is OK because
@@ -340,7 +340,7 @@ bitgetpage(HeapScanDesc scan, TBMIterateResult *tbmres)

 /*
 * We must hold share lock on the buffer content while examining tuple
-* visibility. Afterwards, however, the tuples we have found to be
+* visibility. Afterwards, however, the tuples we have found to be
 * visible are guaranteed good as long as we hold the buffer pin.
 */
 LockBuffer(buffer, BUFFER_LOCK_SHARE);
@@ -147,7 +147,7 @@ ExecInitForeignScan(ForeignScan *node, EState *estate, int eflags)
 scanstate->ss.ss_currentRelation = currentRelation;

 /*
-* get the scan type from the relation descriptor. (XXX at some point we
+* get the scan type from the relation descriptor. (XXX at some point we
 * might want to let the FDW editorialize on the scan tupdesc.)
 */
 ExecAssignScanType(&scanstate->ss, RelationGetDescr(currentRelation));
@@ -232,7 +232,7 @@ FunctionNext(FunctionScanState *node)
 }

 /*
-* If alldone, we just return the previously-cleared scanslot. Otherwise,
+* If alldone, we just return the previously-cleared scanslot. Otherwise,
 * finish creating the virtual tuple.
 */
 if (!alldone)
@@ -449,8 +449,8 @@ ExecInitFunctionScan(FunctionScan *node, EState *estate, int eflags)
 * Create the combined TupleDesc
 *
 * If there is just one function without ordinality, the scan result
-* tupdesc is the same as the function result tupdesc --- except that
-* we may stuff new names into it below, so drop any rowtype label.
+* tupdesc is the same as the function result tupdesc --- except that we
+* may stuff new names into it below, so drop any rowtype label.
 */
 if (scanstate->simple)
 {
@ -365,7 +365,7 @@ ExecHashTableCreate(Hash *node, List *hashOperators, bool keepNulls)
|
||||
|
||||
/*
|
||||
* Set up for skew optimization, if possible and there's a need for more
|
||||
* than one batch. (In a one-batch join, there's no point in it.)
|
||||
* than one batch. (In a one-batch join, there's no point in it.)
|
||||
*/
|
||||
if (nbatch > 1)
|
||||
ExecHashBuildSkewHash(hashtable, node, num_skew_mcvs);
|
||||
@ -407,7 +407,7 @@ ExecChooseHashTableSize(double ntuples, int tupwidth, bool useskew,
|
||||
|
||||
/*
|
||||
* Estimate tupsize based on footprint of tuple in hashtable... note this
|
||||
* does not allow for any palloc overhead. The manipulations of spaceUsed
|
||||
* does not allow for any palloc overhead. The manipulations of spaceUsed
|
||||
* don't count palloc overhead either.
|
||||
*/
|
||||
tupsize = HJTUPLE_OVERHEAD +
@ -459,7 +459,7 @@ ExecChooseHashTableSize(double ntuples, int tupwidth, bool useskew,
/*
* Set nbuckets to achieve an average bucket load of NTUP_PER_BUCKET when
* memory is filled. Set nbatch to the smallest power of 2 that appears
* sufficient.  The Min() steps limit the results so that the pointer
* sufficient. The Min() steps limit the results so that the pointer
* arrays we'll try to allocate do not exceed work_mem.
*/
max_pointers = (work_mem * 1024L) / sizeof(void *);
@ -498,8 +498,8 @@ ExecChooseHashTableSize(double ntuples, int tupwidth, bool useskew,

/*
* Both nbuckets and nbatch must be powers of 2 to make
* ExecHashGetBucketAndBatch fast.  We already fixed nbatch; now inflate
* nbuckets to the next larger power of 2.  We also force nbuckets to not
* ExecHashGetBucketAndBatch fast. We already fixed nbatch; now inflate
* nbuckets to the next larger power of 2. We also force nbuckets to not
* be real small, by starting the search at 2^10. (Note: above we made
* sure that nbuckets is not more than INT_MAX / 2, so this loop cannot
* overflow, nor can the final shift to recalculate nbuckets.)
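A minimal sketch of the inflation step this comment describes, assuming nbuckets was already clamped to INT_MAX / 2 as the note says (variable names are illustrative):

    /* Round nbuckets up to the next power of 2, starting at 2^10 so the
     * bucket array is never trivially small; with powers of 2, bucket and
     * batch numbers reduce to plain masking and shifting. */
    int         i = 1024;

    while (i < nbuckets)
        i <<= 1;                /* cannot overflow: nbuckets <= INT_MAX / 2 */
    nbuckets = i;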
@ -817,7 +817,7 @@ ExecHashGetHashValue(HashJoinTable hashtable,
* the hash support function as strict even if the operator is not.
*
* Note: currently, all hashjoinable operators must be strict since
* the hash index AM assumes that.  However, it takes so little extra
* the hash index AM assumes that. However, it takes so little extra
* code here to allow non-strict that we may as well do it.
*/
if (isNull)
@ -1237,7 +1237,7 @@ ExecHashBuildSkewHash(HashJoinTable hashtable, Hash *node, int mcvsToUse)
/*
* While we have not hit a hole in the hashtable and have not hit
* the desired bucket, we have collided with some previous hash
* value, so try the next bucket location.  NB: this code must
* value, so try the next bucket location. NB: this code must
* match ExecHashGetSkewBucket.
*/
bucket = hashvalue & (nbuckets - 1);
@ -1435,7 +1435,7 @@ ExecHashRemoveNextSkewBucket(HashJoinTable hashtable)
* NOTE: this is not nearly as simple as it looks on the surface, because
* of the possibility of collisions in the hashtable. Suppose that hash
* values A and B collide at a particular hashtable entry, and that A was
* entered first so B gets shifted to a different table entry.  If we were
* entered first so B gets shifted to a different table entry. If we were
* to remove A first then ExecHashGetSkewBucket would mistakenly start
* reporting that B is not in the hashtable, because it would hit the NULL
* before finding B. However, we always remove entries in the reverse

@ -126,7 +126,7 @@ ExecHashJoin(HashJoinState *node)
* check this when the outer relation's startup cost is less
* than the projected cost of building the hash table.
* Otherwise it's best to build the hash table first and see
* if the inner relation is empty.  (When it's a left join, we
* if the inner relation is empty. (When it's a left join, we
* should always make this check, since we aren't going to be
* able to skip the join on the strength of an empty inner
* relation anyway.)
@ -530,7 +530,7 @@ ExecInitHashJoin(HashJoin *node, EState *estate, int eflags)
* tuple slot of the Hash node (which is our inner plan). we can do this
* because Hash nodes don't return tuples via ExecProcNode() -- instead
* the hash join node uses ExecScanHashBucket() to get at the contents of
* the hash table.  -cim 6/9/91
* the hash table. -cim 6/9/91
*/
{
HashState *hashstate = (HashState *) innerPlanState(hjstate);
@ -896,7 +896,7 @@ ExecHashJoinSaveTuple(MinimalTuple tuple, uint32 hashvalue,

/*
* ExecHashJoinGetSavedTuple
* read the next tuple from a batch file.  Return NULL if no more.
* read the next tuple from a batch file. Return NULL if no more.
*
* On success, *hashvalue is set to the tuple's hash value, and the tuple
* itself is stored in the given slot.

@ -88,7 +88,7 @@ IndexOnlyNext(IndexOnlyScanState *node)
* Note on Memory Ordering Effects: visibilitymap_test does not lock
* the visibility map buffer, and therefore the result we read here
* could be slightly stale. However, it can't be stale enough to
* matter.  It suffices to show that (1) there is a read barrier
* matter. It suffices to show that (1) there is a read barrier
* between the time we read the index TID and the time we test the
* visibility map; and (2) there is a write barrier between the time
* some other concurrent process clears the visibility map bit and the
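For orientation, a hedged sketch of the check this note is justifying (9.4-era function and field names assumed; not a verbatim excerpt):

    /* Consult the visibility map without locking it; if the page is not
     * known all-visible, fall back to fetching the heap tuple so its
     * visibility can be tested the usual way. */
    if (!visibilitymap_test(scandesc->heapRelation,
                            ItemPointerGetBlockNumber(tid),
                            &node->ioss_VMBuffer))
    {
        tuple = index_fetch_heap(scandesc);
        if (tuple == NULL)
            continue;           /* no visible tuple; advance to next TID */
    }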
@ -113,7 +113,7 @@ IndexOnlyNext(IndexOnlyScanState *node)
/*
* Only MVCC snapshots are supported here, so there should be no
* need to keep following the HOT chain once a visible entry has
* been found.  If we did want to allow that, we'd need to keep
* been found. If we did want to allow that, we'd need to keep
* more state to remember not to call index_getnext_tid next time.
*/
if (scandesc->xs_continue_hot)
@ -122,7 +122,7 @@ IndexOnlyNext(IndexOnlyScanState *node)
/*
* Note: at this point we are holding a pin on the heap page, as
* recorded in scandesc->xs_cbuf. We could release that pin now,
* but it's not clear whether it's a win to do so.  The next index
* but it's not clear whether it's a win to do so. The next index
* entry might require a visit to the same heap page.
*/
}

@ -216,7 +216,7 @@ ExecIndexEvalRuntimeKeys(ExprContext *econtext,

/*
* For each run-time key, extract the run-time expression and evaluate
* it with respect to the current context.  We then stick the result
* it with respect to the current context. We then stick the result
* into the proper scan key.
*
* Note: the result of the eval could be a pass-by-ref value that's
@ -349,7 +349,7 @@ ExecIndexAdvanceArrayKeys(IndexArrayKeyInfo *arrayKeys, int numArrayKeys)
/*
* Note we advance the rightmost array key most quickly, since it will
* correspond to the lowest-order index column among the available
* qualifications.  This is hypothesized to result in better locality of
* qualifications. This is hypothesized to result in better locality of
* access in the index.
*/
for (j = numArrayKeys - 1; j >= 0; j--)
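The context line above is the head of the advance loop; a hedged sketch of the full odometer-style advance it performs (field names per the 9.4-era IndexArrayKeyInfo; illustrative, not a verbatim excerpt):

    /* Bump the rightmost key; on wraparound, reset it and carry one key
     * to the left.  Returns false once every key has wrapped. */
    for (j = numArrayKeys - 1; j >= 0; j--)
    {
        if (++arrayKeys[j].next_elem < arrayKeys[j].num_elems)
            return true;                /* advanced this key: new combination */
        arrayKeys[j].next_elem = 0;     /* wrapped: carry to the key at left */
    }
    return false;                       /* all combinations exhausted */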

@ -113,7 +113,7 @@ ExecLimit(LimitState *node)

/*
* The subplan is known to return no tuples (or not more than
* OFFSET tuples, in general).  So we return no tuples.
* OFFSET tuples, in general). So we return no tuples.
*/
return NULL;

@ -182,7 +182,7 @@ lnext:
tuple.t_self = copyTuple->t_self;

/*
* Need to run a recheck subquery.  Initialize EPQ state if we
* Need to run a recheck subquery. Initialize EPQ state if we
* didn't do so already.
*/
if (!epq_started)
@ -213,7 +213,7 @@ lnext:
{
/*
* First, fetch a copy of any rows that were successfully locked
* without any update having occurred.  (We do this in a separate pass
* without any update having occurred. (We do this in a separate pass
* so as to avoid overhead in the common case where there are no
* concurrent updates.)
*/
@ -318,7 +318,7 @@ ExecInitLockRows(LockRows *node, EState *estate, int eflags)

/*
* Locate the ExecRowMark(s) that this node is responsible for, and
* construct ExecAuxRowMarks for them.  (InitPlan should already have
* construct ExecAuxRowMarks for them. (InitPlan should already have
* built the global list of ExecRowMarks.)
*/
lrstate->lr_arowMarks = NIL;
@ -340,7 +340,7 @@ ExecInitLockRows(LockRows *node, EState *estate, int eflags)
aerm = ExecBuildAuxRowMark(erm, outerPlan->targetlist);

/*
* Only locking rowmarks go into our own list.  Non-locking marks are
* Only locking rowmarks go into our own list. Non-locking marks are
* passed off to the EvalPlanQual machinery. This is because we don't
* want to bother fetching non-locked rows unless we actually have to
* do an EPQ recheck.

@ -185,7 +185,7 @@ ExecInitMaterial(Material *node, EState *estate, int eflags)
/*
* Tuplestore's interpretation of the flag bits is subtly different from
* the general executor meaning: it doesn't think BACKWARD necessarily
* means "backwards all the way to start".  If told to support BACKWARD we
* means "backwards all the way to start". If told to support BACKWARD we
* must include REWIND in the tuplestore eflags, else tuplestore_trim
* might throw away too much.
*/

@ -32,7 +32,7 @@
*                  /
*               MergeAppend---+------+------+--- nil
*               /   \         |      |      |
*             nil   nil      ...    ...    ...
*             nil   nil      ...    ...    ...
*                                 subplans
*/

@ -41,7 +41,7 @@
*
* Therefore, rather than directly executing the merge join clauses,
* we evaluate the left and right key expressions separately and then
* compare the columns one at a time (see MJCompare).  The planner
* compare the columns one at a time (see MJCompare). The planner
* passes us enough information about the sort ordering of the inputs
* to allow us to determine how to make the comparison. We may use the
* appropriate btree comparison function, since Postgres' only notion
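A hedged sketch of the column-at-a-time comparison (MJCompare) described above, using the sort-support comparator; field names follow the 9.4-era MergeJoinClauseData, but treat this as illustrative:

    /* The first nonzero column comparison decides the ordering. */
    for (i = 0; i < nkeys; i++)
    {
        int32       result = ApplySortComparator(clauses[i].ldatum,
                                                 clauses[i].lisnull,
                                                 clauses[i].rdatum,
                                                 clauses[i].risnull,
                                                 &clauses[i].ssup);

        if (result != 0)
            return result;
    }
    return 0;                   /* all join key columns are equal */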
@ -269,7 +269,7 @@ MJExamineQuals(List *mergeclauses,
* input, since we assume mergejoin operators are strict. If the NULL
* is in the first join column, and that column sorts nulls last, then
* we can further conclude that no following tuple can match anything
* either, since they must all have nulls in the first column.  However,
* either, since they must all have nulls in the first column. However,
* that case is only interesting if we're not in FillOuter mode, else
* we have to visit all the tuples anyway.
*
@ -325,7 +325,7 @@ MJEvalOuterValues(MergeJoinState *mergestate)
/*
* MJEvalInnerValues
*
* Same as above, but for the inner tuple.  Here, we have to be prepared
* Same as above, but for the inner tuple. Here, we have to be prepared
* to load data from either the true current inner, or the marked inner,
* so caller must tell us which slot to load from.
*/
@ -736,7 +736,7 @@ ExecMergeJoin(MergeJoinState *node)
case MJEVAL_MATCHABLE:

/*
* OK, we have the initial tuples.  Begin by skipping
* OK, we have the initial tuples. Begin by skipping
* non-matching tuples.
*/
node->mj_JoinState = EXEC_MJ_SKIP_TEST;
@ -1131,7 +1131,7 @@ ExecMergeJoin(MergeJoinState *node)
* which means that all subsequent outer tuples will be
* larger than our marked inner tuples. So we need not
* revisit any of the marked tuples but can proceed to
* look for a match to the current inner.  If there's
* look for a match to the current inner. If there's
* no more inners, no more matches are possible.
* ----------------
*/
@ -1522,7 +1522,7 @@ ExecInitMergeJoin(MergeJoin *node, EState *estate, int eflags)
* For certain types of inner child nodes, it is advantageous to issue
* MARK every time we advance past an inner tuple we will never return to.
* For other types, MARK on a tuple we cannot return to is a waste of
* cycles.  Detect which case applies and set mj_ExtraMarks if we want to
* cycles. Detect which case applies and set mj_ExtraMarks if we want to
* issue "unnecessary" MARK calls.
*
* Currently, only Material wants the extra MARKs, and it will be helpful

@ -30,7 +30,7 @@
*
* If the query specifies RETURNING, then the ModifyTable returns a
* RETURNING tuple after completing each row insert, update, or delete.
* It must be called again to continue the operation.  Without RETURNING,
* It must be called again to continue the operation. Without RETURNING,
* we just loop within the node until all the work is done, then
* return NULL. This avoids useless call/return overhead.
*/
@ -419,7 +419,7 @@ ldelete:;
* proceed. We don't want to discard the original DELETE
* while keeping the triggered actions based on its deletion;
* and it would be no better to allow the original DELETE
* while discarding updates that it triggered.  The row update
* while discarding updates that it triggered. The row update
* carries some information that might be important according
* to business rules; so throwing an error is the only safe
* course.
@ -491,7 +491,7 @@ ldelete:;
{
/*
* We have to put the target tuple into a slot, which means first we
* gotta fetch it.  We can use the trigger tuple slot.
* gotta fetch it. We can use the trigger tuple slot.
*/
TupleTableSlot *rslot;
HeapTupleData deltuple;
@ -549,7 +549,7 @@ ldelete:;
* note: we can't run UPDATE queries with transactions
* off because UPDATEs are actually INSERTs and our
* scan will mistakenly loop forever, updating the tuple
* it just inserted..  This should be fixed but until it
* it just inserted.. This should be fixed but until it
* is, we don't want to get stuck in an infinite loop
* which corrupts your database..
*
@ -657,7 +657,7 @@ ExecUpdate(ItemPointer tupleid,
*
* If we generate a new candidate tuple after EvalPlanQual testing, we
* must loop back here and recheck constraints. (We don't need to
* redo triggers, however.  If there are any BEFORE triggers then
* redo triggers, however. If there are any BEFORE triggers then
* trigger.c will have done heap_lock_tuple to lock the correct tuple,
* so there's no need to do them again.)
*/
@ -900,7 +900,7 @@ ExecModifyTable(ModifyTableState *node)

/*
* es_result_relation_info must point to the currently active result
* relation while we are within this ModifyTable node.  Even though
* relation while we are within this ModifyTable node. Even though
* ModifyTable nodes can't be nested statically, they can be nested
* dynamically (since our subplan could include a reference to a modifying
* CTE). So we have to save and restore the caller's value.
@ -916,7 +916,7 @@ ExecModifyTable(ModifyTableState *node)
for (;;)
{
/*
* Reset the per-output-tuple exprcontext.  This is needed because
* Reset the per-output-tuple exprcontext. This is needed because
* triggers expect to use that context as workspace. It's a bit ugly
* to do this below the top level of the plan, however. We might need
* to rethink this later.
@ -973,6 +973,7 @@ ExecModifyTable(ModifyTableState *node)
* ctid!! */
tupleid = &tuple_ctid;
}

/*
* Use the wholerow attribute, when available, to reconstruct
* the old relation tuple.
@ -1105,7 +1106,7 @@ ExecInitModifyTable(ModifyTable *node, EState *estate, int eflags)
* call ExecInitNode on each of the plans to be executed and save the
* results into the array "mt_plans". This is also a convenient place to
* verify that the proposed target relations are valid and open their
* indexes for insertion of new index entries.  Note we *must* set
* indexes for insertion of new index entries. Note we *must* set
* estate->es_result_relation_info correctly while we initialize each
* sub-plan; ExecContextForcesOids depends on that!
*/
@ -1125,7 +1126,7 @@ ExecInitModifyTable(ModifyTable *node, EState *estate, int eflags)
/*
* If there are indices on the result relation, open them and save
* descriptors in the result relation info, so that we can add new
* index entries for the tuples we add/update.  We need not do this
* index entries for the tuples we add/update. We need not do this
* for a DELETE, however, since deletion doesn't affect indexes. Also,
* inside an EvalPlanQual operation, the indexes might be open
* already, since we share the resultrel state with the original
@ -1175,6 +1176,7 @@ ExecInitModifyTable(ModifyTable *node, EState *estate, int eflags)
WithCheckOption *wco = (WithCheckOption *) lfirst(ll);
ExprState *wcoExpr = ExecInitExpr((Expr *) wco->qual,
mtstate->mt_plans[i]);

wcoExprs = lappend(wcoExprs, wcoExpr);
}

@ -1194,7 +1196,7 @@ ExecInitModifyTable(ModifyTable *node, EState *estate, int eflags)

/*
* Initialize result tuple slot and assign its rowtype using the first
* RETURNING list.  We assume the rest will look the same.
* RETURNING list. We assume the rest will look the same.
*/
tupDesc = ExecTypeFromTL((List *) linitial(node->returningLists),
false);
@ -1240,7 +1242,7 @@ ExecInitModifyTable(ModifyTable *node, EState *estate, int eflags)
/*
* If we have any secondary relations in an UPDATE or DELETE, they need to
* be treated like non-locked relations in SELECT FOR UPDATE, ie, the
* EvalPlanQual mechanism needs to be told about them.  Locate the
* EvalPlanQual mechanism needs to be told about them. Locate the
* relevant ExecRowMarks.
*/
foreach(l, node->rowMarks)
@ -1281,7 +1283,7 @@ ExecInitModifyTable(ModifyTable *node, EState *estate, int eflags)
* attribute present --- no need to look first.
*
* If there are multiple result relations, each one needs its own junk
* filter.  Note multiple rels are only possible for UPDATE/DELETE, so we
* filter. Note multiple rels are only possible for UPDATE/DELETE, so we
* can't be fooled by some needing a filter and some not.
*
* This section of code is also a convenient place to verify that the

@ -316,7 +316,7 @@ ExecReScanRecursiveUnion(RecursiveUnionState *node)

/*
* if chgParam of subnode is not null then plan will be re-scanned by
* first ExecProcNode.  Because of above, we only have to do this to the
* first ExecProcNode. Because of above, we only have to do this to the
* non-recursive term.
*/
if (outerPlan->chgParam == NULL)

@ -5,7 +5,7 @@
*
* The input of a SetOp node consists of tuples from two relations,
* which have been combined into one dataset, with a junk attribute added
* that shows which relation each tuple came from.  In SETOP_SORTED mode,
* that shows which relation each tuple came from. In SETOP_SORTED mode,
* the input has furthermore been sorted according to all the grouping
* columns (ie, all the non-junk attributes). The SetOp node scans each
* group of identical tuples to determine how many came from each input
@ -18,7 +18,7 @@
* relation is the left-hand one for EXCEPT, and tries to make the smaller
* input relation come first for INTERSECT. We build a hash table in memory
* with one entry for each group of identical tuples, and count the number of
* tuples in the group from each relation.  After seeing all the input, we
* tuples in the group from each relation. After seeing all the input, we
* scan the hashtable and generate the correct output using those counts.
* We can avoid making hashtable entries for any tuples appearing only in the
* second input relation, since they cannot result in any output.
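A hedged sketch of the final counting step (an illustrative helper; the real node distinguishes all four set operations):

    /* Emit as many copies of a group as its per-input counts call for;
     * the INTERSECT cases are shown here. */
    static long
    setop_output_count(long lcount, long rcount, bool output_all)
    {
        if (output_all)
            return Min(lcount, rcount);             /* INTERSECT ALL */
        return (lcount > 0 && rcount > 0) ? 1 : 0;  /* plain INTERSECT */
    }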
@ -268,7 +268,7 @@ setop_retrieve_direct(SetOpState *setopstate)

/*
* Store the copied first input tuple in the tuple table slot reserved
* for it.  The tuple will be deleted when it is cleared from the
* for it. The tuple will be deleted when it is cleared from the
* slot.
*/
ExecStoreTuple(setopstate->grp_firstTuple,

@ -261,12 +261,12 @@ ExecScanSubPlan(SubPlanState *node,
* semantics for ANY_SUBLINK or AND semantics for ALL_SUBLINK.
* (ROWCOMPARE_SUBLINK doesn't allow multiple tuples from the subplan.)
* NULL results from the combining operators are handled according to the
* usual SQL semantics for OR and AND.  The result for no input tuples is
* usual SQL semantics for OR and AND. The result for no input tuples is
* FALSE for ANY_SUBLINK, TRUE for ALL_SUBLINK, NULL for
* ROWCOMPARE_SUBLINK.
*
* For EXPR_SUBLINK we require the subplan to produce no more than one
* tuple, else an error is raised.  If zero tuples are produced, we return
* tuple, else an error is raised. If zero tuples are produced, we return
* NULL. Assuming we get a tuple, we just use its first column (there can
* be only one non-junk column in this case).
*
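A hedged sketch of the OR-combining rule for ANY_SUBLINK described above (variable names illustrative, not a verbatim excerpt):

    /* A definitely-true row result decides the ANY immediately; a NULL
     * row result only matters if no row ever comes up true. */
    if (rownull)
        *isNull = true;                 /* remember we saw a NULL */
    else if (DatumGetBool(rowresult))
    {
        result = BoolGetDatum(true);    /* definite true: can stop scanning */
        *isNull = false;
        break;
    }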
@ -409,7 +409,7 @@ ExecScanSubPlan(SubPlanState *node,
else if (!found)
{
/*
* deal with empty subplan result.  result/isNull were previously
* deal with empty subplan result. result/isNull were previously
* initialized correctly for all sublink types except EXPR and
* ROWCOMPARE; for those, return NULL.
*/
@ -894,7 +894,7 @@ ExecInitSubPlan(SubPlan *subplan, PlanState *parent)
*
* This is called from ExecEvalParamExec() when the value of a PARAM_EXEC
* parameter is requested and the param's execPlan field is set (indicating
* that the param has not yet been evaluated).  This allows lazy evaluation
* that the param has not yet been evaluated). This allows lazy evaluation
* of initplans: we don't run the subplan until/unless we need its output.
* Note that this routine MUST clear the execPlan fields of the plan's
* output parameters after evaluating them!
@ -1122,7 +1122,7 @@ ExecInitAlternativeSubPlan(AlternativeSubPlan *asplan, PlanState *parent)
/*
* Select the one to be used. For this, we need an estimate of the number
* of executions of the subplan. We use the number of output rows
* expected from the parent plan node.  This is a good estimate if we are
* expected from the parent plan node. This is a good estimate if we are
* in the parent's targetlist, and an underestimate (but probably not by
* more than a factor of 2) if we are in the qual.
*/

@ -194,7 +194,7 @@ ExecReScanSubqueryScan(SubqueryScanState *node)

/*
* ExecReScan doesn't know about my subplan, so I have to do
* changed-parameter signaling myself.  This is just as well, because the
* changed-parameter signaling myself. This is just as well, because the
* subplan has its own memory context in which its chgParam state lives.
*/
if (node->ss.ps.chgParam != NULL)

@ -4,7 +4,7 @@
* Routines to handle unique'ing of queries where appropriate
*
* Unique is a very simple node type that just filters out duplicate
* tuples from a stream of sorted tuples from its subplan.  It's essentially
* tuples from a stream of sorted tuples from its subplan. It's essentially
* a dumbed-down form of Group: the duplicate-removal functionality is
* identical. However, Unique doesn't do projection nor qual checking,
* so it's marginally more efficient for cases where neither is needed.
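A hedged sketch of that filter (state and helper names follow the 9.4-era executor, but this is illustrative):

    /* Fetch tuples until one differs from the last tuple we returned. */
    for (;;)
    {
        slot = ExecProcNode(outerPlan);
        if (TupIsNull(slot))
            return NULL;                /* end of sorted input */
        if (TupIsNull(resultTupleSlot) ||
            !execTuplesMatch(slot, resultTupleSlot, numCols, uniqColIdx,
                             eqfunctions, tmpContext))
            break;                      /* first tuple of a new group */
    }
    return ExecCopySlot(resultTupleSlot, slot);     /* remember and emit it */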

@ -215,7 +215,7 @@ ExecInitValuesScan(ValuesScan *node, EState *estate, int eflags)
planstate = &scanstate->ss.ps;

/*
* Create expression contexts.  We need two, one for per-sublist
* Create expression contexts. We need two, one for per-sublist
* processing and one for execScan.c to use for quals and projections. We
* cheat a little by using ExecAssignExprContext() to build both.
*/

@ -4,7 +4,7 @@
* routines to handle WindowAgg nodes.
*
* A WindowAgg node evaluates "window functions" across suitable partitions
* of the input tuple set.  Any one WindowAgg works for just a single window
* of the input tuple set. Any one WindowAgg works for just a single window
* specification, though it can evaluate multiple window functions sharing
* identical window specifications. The input tuples are required to be
* delivered in sorted order, with the PARTITION BY columns (if any) as
@ -14,7 +14,7 @@
*
* Since window functions can require access to any or all of the rows in
* the current partition, we accumulate rows of the partition into a
* tuplestore.  The window functions are called using the WindowObject API
* tuplestore. The window functions are called using the WindowObject API
* so that they can access those rows as needed.
*
* We also support using plain aggregate functions as window functions.
@ -280,7 +280,7 @@ advance_windowaggregate(WindowAggState *winstate,
{
/*
* For a strict transfn, nothing happens when there's a NULL input; we
* just keep the prior transValue.  Note transValueCount doesn't
* just keep the prior transValue. Note transValueCount doesn't
* change either.
*/
for (i = 1; i <= numArguments; i++)
@ -330,7 +330,7 @@ advance_windowaggregate(WindowAggState *winstate,
}

/*
* OK to call the transition function.  Set winstate->curaggcontext while
* OK to call the transition function. Set winstate->curaggcontext while
* calling it, for possible use by AggCheckCallContext.
*/
InitFunctionCallInfoData(*fcinfo, &(peraggstate->transfn),
@ -362,7 +362,7 @@ advance_windowaggregate(WindowAggState *winstate,

/*
* If pass-by-ref datatype, must copy the new value into aggcontext and
* pfree the prior transValue.  But if transfn returned a pointer to its
* pfree the prior transValue. But if transfn returned a pointer to its
* first input, we don't need to do anything.
*/
if (!peraggstate->transtypeByVal &&
@ -485,7 +485,7 @@ advance_windowaggregate_base(WindowAggState *winstate,
}

/*
* OK to call the inverse transition function.  Set
* OK to call the inverse transition function. Set
* winstate->curaggcontext while calling it, for possible use by
* AggCheckCallContext.
*/
@ -513,7 +513,7 @@ advance_windowaggregate_base(WindowAggState *winstate,

/*
* If pass-by-ref datatype, must copy the new value into aggcontext and
* pfree the prior transValue.  But if invtransfn returned a pointer to
* pfree the prior transValue. But if invtransfn returned a pointer to
* its first input, we don't need to do anything.
*
* Note: the checks for null values here will never fire, but it seems
@ -827,7 +827,7 @@ eval_windowaggregates(WindowAggState *winstate)
*
* We assume that aggregates using the shared context always restart if
* *any* aggregate restarts, and we may thus clean up the shared
* aggcontext if that is the case.  Private aggcontexts are reset by
* aggcontext if that is the case. Private aggcontexts are reset by
* initialize_windowaggregate() if their owning aggregate restarts. If we
* aren't restarting an aggregate, we need to free any previously saved
* result for it, else we'll leak memory.
@ -864,9 +864,9 @@ eval_windowaggregates(WindowAggState *winstate)
* (i.e., frameheadpos) and aggregatedupto, while restarted aggregates
* contain no rows. If there are any restarted aggregates, we must thus
* begin aggregating anew at frameheadpos, otherwise we may simply
* continue at aggregatedupto.  We must remember the old value of
* continue at aggregatedupto. We must remember the old value of
* aggregatedupto to know how long to skip advancing non-restarted
* aggregates.  If we modify aggregatedupto, we must also clear
* aggregates. If we modify aggregatedupto, we must also clear
* agg_row_slot, per the loop invariant below.
*/
aggregatedupto_nonrestarted = winstate->aggregatedupto;
@ -881,7 +881,7 @@ eval_windowaggregates(WindowAggState *winstate)
* Advance until we reach a row not in frame (or end of partition).
*
* Note the loop invariant: agg_row_slot is either empty or holds the row
* at position aggregatedupto.  We advance aggregatedupto after processing
* at position aggregatedupto. We advance aggregatedupto after processing
* a row.
*/
for (;;)
@ -1142,7 +1142,7 @@ spool_tuples(WindowAggState *winstate, int64 pos)

/*
* If the tuplestore has spilled to disk, alternate reading and writing
* becomes quite expensive due to frequent buffer flushes.  It's cheaper
* becomes quite expensive due to frequent buffer flushes. It's cheaper
* to force the entire partition to get spooled in one go.
*
* XXX this is a horrid kluge --- it'd be better to fix the performance
@ -1239,7 +1239,7 @@ release_partition(WindowAggState *winstate)
* to our window framing rule
*
* The caller must have already determined that the row is in the partition
* and fetched it into a slot.  This function just encapsulates the framing
* and fetched it into a slot. This function just encapsulates the framing
* rules.
*/
static bool
@ -1341,7 +1341,7 @@ row_is_in_frame(WindowAggState *winstate, int64 pos, TupleTableSlot *slot)
*
* Uses the winobj's read pointer for any required fetches; hence, if the
* frame mode is one that requires row comparisons, the winobj's mark must
* not be past the currently known frame head.  Also uses the specified slot
* not be past the currently known frame head. Also uses the specified slot
* for any required fetches.
*/
static void
@ -1446,7 +1446,7 @@ update_frameheadpos(WindowObject winobj, TupleTableSlot *slot)
*
* Uses the winobj's read pointer for any required fetches; hence, if the
* frame mode is one that requires row comparisons, the winobj's mark must
* not be past the currently known frame tail.  Also uses the specified slot
* not be past the currently known frame tail. Also uses the specified slot
* for any required fetches.
*/
static void
@ -1789,8 +1789,8 @@ ExecInitWindowAgg(WindowAgg *node, EState *estate, int eflags)
winstate->ss.ps.state = estate;

/*
* Create expression contexts.  We need two, one for per-input-tuple
* processing and one for per-output-tuple processing.  We cheat a little
* Create expression contexts. We need two, one for per-input-tuple
* processing and one for per-output-tuple processing. We cheat a little
* by using ExecAssignExprContext() to build both.
*/
ExecAssignExprContext(estate, &winstate->ss.ps);
@ -2288,7 +2288,7 @@ initialize_peragg(WindowAggState *winstate, WindowFunc *wfunc,

/*
* Insist that forward and inverse transition functions have the same
* strictness setting.  Allowing them to differ would require handling
* strictness setting. Allowing them to differ would require handling
* more special cases in advance_windowaggregate and
* advance_windowaggregate_base, for no discernible benefit. This should
* have been checked at agg definition time, but we must check again in
@ -2467,7 +2467,7 @@ window_gettupleslot(WindowObject winobj, int64 pos, TupleTableSlot *slot)
* requested amount of space. Subsequent calls just return the same chunk.
*
* Memory obtained this way is normally used to hold state that should be
* automatically reset for each new partition.  If a window function wants
* automatically reset for each new partition. If a window function wants
* to hold state across the whole query, fcinfo->fn_extra can be used in the
* usual way for that.
*/

@ -82,7 +82,7 @@ ExecWorkTableScan(WorkTableScanState *node)
{
/*
* On the first call, find the ancestor RecursiveUnion's state via the
* Param slot reserved for it.  (We can't do this during node init because
* Param slot reserved for it. (We can't do this during node init because
* there are corner cases where we'll get the init call before the
* RecursiveUnion does.)
*/

@ -256,7 +256,7 @@ AtEOSubXact_SPI(bool isCommit, SubTransactionId mySubid)
}

/*
* Pop the stack entry and reset global variables.  Unlike
* Pop the stack entry and reset global variables. Unlike
* SPI_finish(), we don't risk switching to memory contexts that might
* be already gone.
*/
@ -1306,7 +1306,7 @@ SPI_cursor_open_internal(const char *name, SPIPlanPtr plan,
}

/*
* Disallow SCROLL with SELECT FOR UPDATE.  This is not redundant with the
* Disallow SCROLL with SELECT FOR UPDATE. This is not redundant with the
* check in transformDeclareCursorStmt because the cursor options might
* not have come through there.
*/
@ -1560,7 +1560,7 @@ SPI_plan_is_valid(SPIPlanPtr plan)
/*
* SPI_result_code_string --- convert any SPI return code to a string
*
* This is often useful in error messages.  Most callers will probably
* This is often useful in error messages. Most callers will probably
* only pass negative (error-case) codes, but for generality we recognize
* the success codes too.
*/
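A hedged usage sketch of that helper (the query string here is arbitrary):

    int         ret = SPI_execute("SELECT 1", true, 0);

    if (ret < 0)
        elog(ERROR, "SPI_execute failed: %s", SPI_result_code_string(ret));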
@ -1630,7 +1630,7 @@ SPI_result_code_string(int code)
* CachedPlanSources.
*
* This is exported so that pl/pgsql can use it (this beats letting pl/pgsql
* look directly into the SPIPlan for itself).  It's not documented in
* look directly into the SPIPlan for itself). It's not documented in
* spi.sgml because we'd just as soon not have too many places using this.
*/
List *
@ -1646,7 +1646,7 @@ SPI_plan_get_plan_sources(SPIPlanPtr plan)
* return NULL. Caller is responsible for doing ReleaseCachedPlan().
*
* This is exported so that pl/pgsql can use it (this beats letting pl/pgsql
* look directly into the SPIPlan for itself).  It's not documented in
* look directly into the SPIPlan for itself). It's not documented in
* spi.sgml because we'd just as soon not have too many places using this.
*/
CachedPlan *
@ -2204,7 +2204,7 @@ _SPI_execute_plan(SPIPlanPtr plan, ParamListInfo paramLI,

/*
* The last canSetTag query sets the status values returned to the
* caller.  Be careful to free any tuptables not returned, to
* caller. Be careful to free any tuptables not returned, to
* avoid intratransaction memory leak.
*/
if (canSetTag)

@ -5,7 +5,7 @@
* a Tuplestore.
*
* Optionally, we can force detoasting (but not decompression) of out-of-line
* toasted values.  This is to support cursors WITH HOLD, which must retain
* toasted values. This is to support cursors WITH HOLD, which must retain
* data even if the underlying table is dropped.
*
*