mirror of https://github.com/postgres/postgres.git
Remove tabs after spaces in C comments
This was not changed in HEAD, but will be done later as part of a pgindent run. Future pgindent runs will also do this.

Report by Tom Lane.

Backpatch through all supported branches, but not HEAD.
@@ -643,8 +643,8 @@ restore_toc_entry(ArchiveHandle *AH, TocEntry *te,
 /*
  * In parallel restore, if we created the table earlier in
  * the run then we wrap the COPY in a transaction and
- * precede it with a TRUNCATE. If archiving is not on
- * this prevents WAL-logging the COPY. This obtains a
+ * precede it with a TRUNCATE. If archiving is not on
+ * this prevents WAL-logging the COPY. This obtains a
  * speedup similar to that from using single_txn mode in
  * non-parallel restores.
  */
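For context on the comment above: when WAL archiving is not on, a COPY into a table that was created or truncated in the same transaction can skip WAL-logging. A rough standalone sketch of that command sequence in plain libpq follows; the connection, table name, and data are invented for illustration, error handling is omitted, and this is not the pg_restore code itself.

#include <libpq-fe.h>

/*
 * Illustrative only: wrap the COPY in a transaction and TRUNCATE the
 * pre-created table first, so the server can avoid WAL-logging the COPY
 * when archiving is not on.  Error handling omitted for brevity.
 */
static void
load_table(PGconn *conn)
{
	PGresult   *res;

	PQclear(PQexec(conn, "BEGIN"));
	PQclear(PQexec(conn, "TRUNCATE TABLE my_table"));

	res = PQexec(conn, "COPY my_table FROM stdin");
	if (PQresultStatus(res) == PGRES_COPY_IN)
	{
		PQputCopyData(conn, "1\tone\n2\ttwo\n", 12);
		PQputCopyEnd(conn, NULL);
		PQclear(PQgetResult(conn));		/* collect the COPY's final status */
	}
	PQclear(res);

	PQclear(PQexec(conn, "COMMIT"));
}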
@@ -1555,7 +1555,7 @@ _moveBefore(ArchiveHandle *AH, TocEntry *pos, TocEntry *te)
  * items.
  *
  * The arrays are indexed by dump ID (so entry zero is unused). Note that the
- * array entries run only up to maxDumpId. We might see dependency dump IDs
+ * array entries run only up to maxDumpId. We might see dependency dump IDs
  * beyond that (if the dump was partial); so always check the array bound
  * before trying to touch an array entry.
  */
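The bound check this comment insists on can be illustrated with a small standalone helper; the names below are invented for the sketch (the real arrays live in ArchiveHandle), and an out-of-range dependency ID simply has no entry.

#include <stddef.h>

struct toc_item;						/* opaque stand-in for TocEntry */

static struct toc_item **items_by_dump_id;	/* entry zero unused */
static int	max_dump_id;

/*
 * Guarded lookup: dependency dump IDs from a partial dump can exceed
 * max_dump_id, so anything out of range is treated as "no entry".
 */
static struct toc_item *
lookup_by_dump_id(int dump_id)
{
	if (dump_id < 1 || dump_id > max_dump_id)
		return NULL;
	return items_by_dump_id[dump_id];
}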
@@ -1579,7 +1579,7 @@ buildTocEntryArrays(ArchiveHandle *AH)

 /*
  * tableDataId provides the TABLE DATA item's dump ID for each TABLE
- * TOC entry that has a DATA item. We compute this by reversing the
+ * TOC entry that has a DATA item. We compute this by reversing the
  * TABLE DATA item's dependency, knowing that a TABLE DATA item has
  * just one dependency and it is the TABLE item.
  */
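A minimal sketch of the reversal the comment describes, assuming (as the comment states) that each TABLE DATA item has exactly one dependency, namely its TABLE's dump ID; the struct and array names are invented for illustration.

struct data_item
{
	int			dump_id;		/* the TABLE DATA item's own dump ID */
	int			table_dep;		/* its single dependency: the TABLE's dump ID */
};

/*
 * Walk the TABLE DATA items once and record, for each TABLE dump ID,
 * which DATA item loads it.  table_data_id is indexed by TABLE dump ID.
 */
static void
build_table_data_index(const struct data_item *items, int nitems,
					   int *table_data_id)
{
	int			i;

	for (i = 0; i < nitems; i++)
		table_data_id[items[i].table_dep] = items[i].dump_id;
}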
@@ -2618,7 +2618,7 @@ _doSetSessionAuth(ArchiveHandle *AH, const char *user)
 appendPQExpBuffer(cmd, "SET SESSION AUTHORIZATION ");

 /*
- * SQL requires a string literal here. Might as well be correct.
+ * SQL requires a string literal here. Might as well be correct.
  */
 if (user && *user)
 	appendStringLiteralAHX(cmd, user, AH);
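The point about needing a string literal is standard SQL quoting: the user name goes inside single quotes with any embedded quote doubled. Below is a simplified standalone sketch; the real code builds the command in a PQExpBuffer via appendStringLiteralAHX(), which also handles encoding and standard_conforming_strings, and the DEFAULT fallback here is only illustrative (SET SESSION AUTHORIZATION DEFAULT resets the setting).

#include <stdio.h>

/*
 * Sketch of emitting SET SESSION AUTHORIZATION with a proper SQL string
 * literal: embedded single quotes are doubled.  Simplified illustration,
 * not the pg_restore code.
 */
static void
print_set_session_authorization(const char *user)
{
	fputs("SET SESSION AUTHORIZATION ", stdout);
	if (user && *user)
	{
		const char *p;

		putchar('\'');
		for (p = user; *p; p++)
		{
			if (*p == '\'')
				putchar('\'');	/* double any embedded quote */
			putchar(*p);
		}
		putchar('\'');
	}
	else
		fputs("DEFAULT", stdout);
	putchar('\n');
}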
@@ -2749,7 +2749,7 @@ _becomeUser(ArchiveHandle *AH, const char *user)
 }

 /*
- * Become the owner of the given TOC entry object. If
+ * Become the owner of the given TOC entry object. If
  * changes in ownership are not allowed, this doesn't do anything.
  */
 static void
@@ -3242,7 +3242,7 @@ ReadHead(ArchiveHandle *AH)
 /*
  * If we haven't already read the header, do so.
  *
- * NB: this code must agree with _discoverArchiveFormat(). Maybe find a
+ * NB: this code must agree with _discoverArchiveFormat(). Maybe find a
  * way to unify the cases?
  */
 if (!AH->readHeader)
@@ -3353,7 +3353,7 @@ checkSeek(FILE *fp)
 return false;

 /*
- * Check that fseeko(SEEK_SET) works, too. NB: we used to try to test
+ * Check that fseeko(SEEK_SET) works, too. NB: we used to try to test
  * this with fseeko(fp, 0, SEEK_CUR). But some platforms treat that as a
  * successful no-op even on files that are otherwise unseekable.
  */
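The distinction the comment draws can be seen in a standalone version of the check: probe the stream with an explicit fseeko(..., SEEK_SET) back to the current position rather than trusting fseeko(fp, 0, SEEK_CUR), which some platforms accept as a no-op even on unseekable files. This is a simplified sketch, not the actual checkSeek().

#include <stdio.h>
#include <stdbool.h>
#include <sys/types.h>

/*
 * Simplified seekability probe: get the current position and explicitly
 * seek back to it with SEEK_SET.  Both calls must succeed for the stream
 * to count as seekable.
 */
static bool
stream_is_seekable(FILE *fp)
{
	off_t		pos = ftello(fp);

	if (pos < 0)
		return false;
	if (fseeko(fp, pos, SEEK_SET) != 0)
		return false;
	return true;
}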
@@ -3393,7 +3393,7 @@ dumpTimestamp(ArchiveHandle *AH, const char *msg, time_t tim)
  *
  * Work is done in three phases.
  * First we process all SECTION_PRE_DATA tocEntries, in a single connection,
- * just as for a standard restore. Second we process the remaining non-ACL
+ * just as for a standard restore. Second we process the remaining non-ACL
  * steps in parallel worker children (threads on Windows, processes on Unix),
  * each of which connects separately to the database. Finally we process all
  * the ACL entries in a single connection (that happens back in
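As an outline only, the three phases the comment describes can be summarized as below; the function names are invented stand-ins (the real work happens in the restore_toc_entries_prefork() and restore_toc_entries_parallel() functions touched elsewhere in this diff, plus a final serial step).

#include <stdio.h>

/* Hypothetical stubs standing in for the real restore machinery. */
static void
run_pre_data_serially(void)
{
	puts("phase 1: SECTION_PRE_DATA items, one connection");
}

static void
run_remaining_items_in_workers(void)
{
	puts("phase 2: remaining non-ACL items, parallel workers");
}

static void
run_acls_serially(void)
{
	puts("phase 3: ACL items, one connection again");
}

int
main(void)
{
	run_pre_data_serially();
	run_remaining_items_in_workers();
	run_acls_serially();
	return 0;
}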
@@ -3415,7 +3415,7 @@ restore_toc_entries_prefork(ArchiveHandle *AH)
  * Do all the early stuff in a single connection in the parent. There's no
  * great point in running it in parallel, in fact it will actually run
  * faster in a single connection because we avoid all the connection and
- * setup overhead. Also, pre-9.2 pg_dump versions were not very good
+ * setup overhead. Also, pre-9.2 pg_dump versions were not very good
  * about showing all the dependencies of SECTION_PRE_DATA items, so we do
  * not risk trying to process them out-of-order.
  *
@@ -3461,7 +3461,7 @@ restore_toc_entries_prefork(ArchiveHandle *AH)
 }

 /*
- * Now close parent connection in prep for parallel steps. We do this
+ * Now close parent connection in prep for parallel steps. We do this
  * mainly to ensure that we don't exceed the specified number of parallel
  * connections.
  */
@@ -3506,7 +3506,7 @@ restore_toc_entries_parallel(ArchiveHandle *AH, ParallelState *pstate,

 /*
  * Initialize the lists of ready items, the list for pending items has
- * already been initialized in the caller. After this setup, the pending
+ * already been initialized in the caller. After this setup, the pending
  * list is everything that needs to be done but is blocked by one or more
  * dependencies, while the ready list contains items that have no
  * remaining dependencies. Note: we don't yet filter out entries that
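The ready/pending bookkeeping described here amounts to dependency counting: an item stays pending while it has unfinished dependencies and becomes ready when that count reaches zero. A toy standalone sketch of the idea follows; the arrays and the dependency graph are invented, whereas the real code tracks this per TocEntry and hands ready items to the parallel workers.

#include <stdio.h>
#include <stdbool.h>

#define NITEMS 4

static int	deps[NITEMS][NITEMS];	/* deps[a][b] != 0: item a depends on item b */
static int	pending_deps[NITEMS];	/* unfinished dependencies of each item */
static bool done[NITEMS];

int
main(void)
{
	int			a,
				b,
				finished = 0;

	/* Toy graph: 2 depends on 0; 3 depends on 0 and 2. */
	deps[2][0] = 1;
	deps[3][0] = 1;
	deps[3][2] = 1;

	for (a = 0; a < NITEMS; a++)
		for (b = 0; b < NITEMS; b++)
			pending_deps[a] += deps[a][b];

	/* Run any item whose dependency count has dropped to zero. */
	while (finished < NITEMS)
	{
		for (a = 0; a < NITEMS; a++)
		{
			if (done[a] || pending_deps[a] != 0)
				continue;		/* already run, or still pending */
			printf("running item %d\n", a);
			done[a] = true;
			finished++;
			for (b = 0; b < NITEMS; b++)
				if (deps[b][a])
					pending_deps[b]--;	/* release items waiting on a */
		}
	}
	return 0;
}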