mirror of
https://github.com/postgres/postgres.git
synced 2025-07-14 08:21:07 +03:00
Fix assorted inconsistencies.
There were a number of issues in the recent commits, including typos,
mismatches between code and comments, and leftover function declarations.
Fix them.

Reported-by: Alexander Lakhin
Author: Alexander Lakhin, Amit Kapila and Amit Langote
Reviewed-by: Amit Kapila
Discussion: https://postgr.es/m/ef0c0232-0c1d-3a35-63d4-0ebd06e31387@gmail.com
@@ -52,9 +52,6 @@
  * (v) make sure the lock level is set correctly for that operation
  * (vi) don't forget to document the option
  *
- * Note that we don't handle "oids" in relOpts because it is handled by
- * interpretOidsOption().
- *
  * The default choice for any new option should be AccessExclusiveLock.
  * In some cases the lock level can be reduced from there, but the lock
  * level chosen should always conflict with itself to ensure that multiple
@@ -239,8 +239,8 @@ initscan(HeapScanDesc scan, ScanKey key, bool keep_startblock)
  * behaviors, independently of the size of the table; also there is a GUC
  * variable that can disable synchronized scanning.)
  *
- * Note that heap_parallelscan_initialize has a very similar test; if you
- * change this, consider changing that one, too.
+ * Note that table_block_parallelscan_initialize has a very similar test;
+ * if you change this, consider changing that one, too.
  */
 	if (!RelationUsesLocalBuffers(scan->rs_base.rs_rd) &&
 		scan->rs_nblocks > NBuffers / 4)
@@ -1396,15 +1396,6 @@ heap_getnextslot(TableScanDesc sscan, ScanDirection direction, TupleTableSlot *s
  * If the tuple is found but fails the time qual check, then false is returned
  * but tuple->t_data is left pointing to the tuple.
  *
- * keep_buf determines what is done with the buffer in the false-result cases.
- * When the caller specifies keep_buf = true, we retain the pin on the buffer
- * and return it in *userbuf (so the caller must eventually unpin it); when
- * keep_buf = false, the pin is released and *userbuf is set to InvalidBuffer.
- *
- * stats_relation is the relation to charge the heap_fetch operation against
- * for statistical purposes.  (This could be the heap rel itself, an
- * associated index, or NULL to not count the fetch at all.)
- *
  * heap_fetch does not follow HOT chains: only the exact TID requested will
  * be fetched.
  *
@@ -7085,7 +7076,7 @@ heap_compute_xid_horizon_for_tuples(Relation rel,
  * Conjecture: if hitemid is dead then it had xids before the xids
  * marked on LP_NORMAL items. So we just ignore this item and move
  * onto the next, for the purposes of calculating
- * latestRemovedxids.
+ * latestRemovedXid.
  */
 			}
 			else
@@ -286,7 +286,7 @@ heapam_tuple_insert_speculative(Relation relation, TupleTableSlot *slot,
 
 static void
 heapam_tuple_complete_speculative(Relation relation, TupleTableSlot *slot,
-								  uint32 spekToken, bool succeeded)
+								  uint32 specToken, bool succeeded)
 {
 	bool		shouldFree = true;
 	HeapTuple	tuple = ExecFetchSlotHeapTuple(slot, true, &shouldFree);
@@ -350,7 +350,7 @@ end_heap_rewrite(RewriteState state)
  *
  * It's obvious that we must do this when not WAL-logging. It's less
  * obvious that we have to do it even if we did WAL-log the pages. The
- * reason is the same as in tablecmds.c's copy_relation_data(): we're
+ * reason is the same as in storage.c's RelationCopyStorage(): we're
  * writing data that's not in shared buffers, and so a CHECKPOINT
  * occurring during the rewriteheap operation won't have fsync'd data we
  * wrote before the checkpoint.
@@ -91,8 +91,8 @@ int	synchronous_commit = SYNCHRONOUS_COMMIT_ON;
  * need to return the same answers in the parallel worker as they would have
  * in the user backend, so we need some additional bookkeeping.
  *
- * XactTopTransactionId stores the XID of our toplevel transaction, which
- * will be the same as TopTransactionState.transactionId in an ordinary
+ * XactTopFullTransactionId stores the XID of our toplevel transaction, which
+ * will be the same as TopTransactionState.fullTransactionId in an ordinary
  * backend; but in a parallel backend, which does not have the entire
  * transaction state, it will instead be copied from the backend that started
  * the parallel operation.
@@ -314,8 +314,6 @@ static bool recoveryStopAfter;
  *
  * recoveryTargetTLI: the currently understood target timeline; changes
  *
- * recoveryTargetIsLatest: was the requested target timeline 'latest'?
- *
  * expectedTLEs: a list of TimeLineHistoryEntries for recoveryTargetTLI and the timelines of
  * its known parents, newest first (so recoveryTargetTLI is always the
  * first list member). Only these TLIs are expected to be seen in the WAL
@@ -1024,7 +1024,7 @@ log_newpage_buffer(Buffer buffer, bool page_std)
 /*
  * WAL-log a range of blocks in a relation.
  *
- * An image of all pages with block numbers 'startblk' <= X < 'endblock' is
+ * An image of all pages with block numbers 'startblk' <= X < 'endblk' is
  * written to the WAL. If the range is large, this is done in multiple WAL
  * records.
  *