Avoid parsing catalog data twice during BKI file construction.

In the wake of commit 5602265f7, we were doing duplicate-OID detection
quite inefficiently, by invoking duplicate_oids which does all the same
parsing of catalog headers and .dat files as genbki.pl does.  That adds
under half a second on modern machines, but quite a bit more on slow
buildfarm critters, so it seems worth avoiding.  Let's just extend
genbki.pl a little so it can also detect duplicate OIDs, and remove
the duplicate_oids call from the build process.
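As a rough illustration of the approach (this is a hedged sketch, not the
actual genbki.pl code; the %catalog_data structure, loop variables, and the
error message wording are assumptions for the example), duplicate detection
can piggyback on the catalog data the parse step already holds in memory:

    # Sketch: count every OID seen while walking the already-parsed
    # catalog data, then fail once if any value occurs more than once.
    my %oidcounts;
    foreach my $catname (@catnames)
    {
        foreach my $row (@{ $catalog_data{$catname} })
        {
            $oidcounts{ $row->{oid} }++ if defined $row->{oid};
        }
    }
    my @duplicates =
      grep { $oidcounts{$_} > 1 } sort { $a <=> $b } keys %oidcounts;
    if (@duplicates)
    {
        die "genbki.pl: found duplicate OID(s): @duplicates\n";
    }

Since the data is already parsed, this costs one extra pass over in-memory
hashes rather than a second full parse of the headers and .dat files.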

(This also means that duplicate OID detection will happen during
Windows builds, which AFAICS it didn't before.)

This makes the use-case for duplicate_oids a bit dubious, but it's
possible that people will still want to run that check without doing
a whole build run, so let's keep that script.

In passing, move down genbki.pl's creation of its temp output files
so that it doesn't happen until after we've done parsing and validation
of the input.  This avoids leaving a lot of clutter around after a
failure.
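A minimal sketch of that ordering change (the helper names ParseHeader and
validate_catalogs, the $output_path variable, and the .tmp file name are
assumptions here, not necessarily the real script's identifiers):

    # Sketch: do all parsing and validation before any output file exists,
    # so a die() in this phase leaves no half-written files behind.
    my @catalogs = map { ParseHeader($_) } @input_files;   # hypothetical
    validate_catalogs(\@catalogs);                          # dies on bad input

    # Only now create the temporary output files; a later failure can still
    # leave clutter, but ordinary input mistakes no longer do.
    open my $bki_fh, '>', "$output_path/postgres.bki.tmp"
      or die "could not open postgres.bki.tmp: $!";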

John Naylor and Tom Lane

Discussion: https://postgr.es/m/37D774E4-FE1F-437E-B3D2-593F314B7505@postgrespro.ru
Author: Tom Lane
Date:   2018-04-26 13:22:27 -04:00
Parent: dd4cc9d706
Commit: a0854f1072
4 changed files with 60 additions and 20 deletions


@@ -382,8 +382,8 @@
 through the catalog headers and <filename>.dat</filename> files
 to see which ones do not appear.  You can also use
 the <filename>duplicate_oids</filename> script to check for mistakes.
-(That script is run automatically at compile time, and will stop the
-build if a duplicate is found.)
+(<filename>genbki.pl</filename> will also detect duplicate OIDs
+at compile time.)
 </para>
 <para>