This was broken in commit 9c7d06d60680, which inadvertently gave the
wrong value to fast_forward in one StartupDecodingContext call. Fix by
flipping the value. Add a test for the obvious error, namely trying to
initialize a replication slot with a nonexistent output plugin.
While at it, move the CreateDecodingContext call earlier, so that any
errors are reported before sending the CopyBoth message.
Author: Dave Cramer <davecramer@gmail.com>
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/CADK3HHLVkeRe1v4P02-5hj55H3_yJg3AEtpXyEY5T3wuzO2jSg@mail.gmail.com
This allows out-of-tree PLs and similar code to get access to
definitions needed to work with extension data types.
The following existing modules now install headers: contrib/cube,
contrib/hstore, contrib/isn, contrib/ltree, contrib/seg.
Discussion: https://postgr.es/m/87y3euomjh.fsf%40news-spur.riddles.org.uk
Affected test queries have been testing the wrong thing since their
introduction in commit 4c1383efd132e4f532213c8a8cc63a455f55e344.
Back-patch to 9.3 (all supported versions).
A collection of typos I happened to spot while reading code, as well as
grepping for common mistakes.
Backpatch to all supported versions, as applicable, to avoid conflicts
when backporting other commits in the future.
In final_cost_hashjoin(), commit 9c7f5229a allowed inner_unique cases
to follow a code path previously used only for SEMI/ANTI joins; but it
neglected to fix an if-test within that path that assumed SEMI and ANTI
were the only possible cases. This resulted in a wrong value for
hashjointuples, and an ensuing bad cost estimate, for inner_unique normal
joins. Fortunately, for inner_unique normal joins we can assume the number
of joined tuples is the same as for a SEMI join, so no new code is needed;
we just have to invert the test to check for ANTI rather than SEMI.
It turns out that in two contrib tests in which commit 9c7f5229a
changed the plan expected for a query, the change was actually wrong
and induced by this estimation error, not by any real improvement.
Hence this patch also reverts those changes.
Per report from RK Korlapati. Backpatch to v10 where the error was
introduced.
David Rowley
Discussion: https://postgr.es/m/CA+SNy03bhq0fodsfOkeWDCreNjJVjsdHwUsb7AG=jpe0PtZc_g@mail.gmail.com
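
For illustration, a hedged sketch of the corrected branch (variable names
follow costsize.c, but this is not a verbatim quotation of the patch):

    /* Inside the SEMI/ANTI/inner_unique costing path: test for ANTI,
     * so that inner_unique joins fall into the SEMI-like branch. */
    if (path->jpath.jointype == JOIN_ANTI)
        hashjointuples = outer_path_rows - outer_matched_rows;
    else                        /* JOIN_SEMI, or inner_unique */
        hashjointuples = outer_matched_rows;
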
lca_inner() wasn't prepared for the possibility of getting no inputs.
Fix that, and make some cosmetic improvements to the code while at it.
Also, I thought the documentation of this function as returning the
"longest common prefix" of the paths was entirely misleading; it really
returns a path one shorter than the longest common prefix, for the typical
definition of "prefix". Don't use that term in the docs, and adjust the
examples to clarify what really happens.
This has been broken since its beginning, so back-patch to all supported
branches.
Per report from Hailong Li. Thanks to Pierre Ducroquet for diagnosing
and for the initial patch, though I whacked it around some and added
test cases.
Discussion: https://postgr.es/m/5b0d8e4f-f2a3-1305-d612-e00e35a7be66@qunar.com
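
A hedged sketch of the missing guard in lca_inner() (surrounding code
elided; the real patch also tidies the rest of the function):

    /* With no inputs there is no common ancestor to speak of. */
    if (len <= 0)
        return NULL;            /* caller turns this into a SQL NULL */

On the documentation point: lca('1.2.3') yields '1.2', one label shorter
than the longest common prefix of the (single) input.
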
Such replication slots are physical slots freshly created without WAL
being reserved (the default behavior), which have therefore not yet been
used to retain WAL. This prevents advancing
a slot to a position older than any WAL available, which could falsify
calculations for WAL segment recycling.
This also cleans up the code a bit, as ReplicationSlotRelease() would be
called on ERROR, and improves error messages.
Reported-by: Kyotaro Horiguchi
Author: Michael Paquier
Reviewed-by: Andres Freund, Álvaro Herrera, Kyotaro Horiguchi
Discussion: https://postgr.es/m/20180626071305.GH31353@paquier.xyz
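
A hedged sketch of the new check (error wording approximate, not quoted
from the committed code):

    /* Refuse to advance a slot that has never reserved WAL. */
    if (XLogRecPtrIsInvalid(MyReplicationSlot->data.restart_lsn))
        ereport(ERROR,
                (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
                 errmsg("cannot advance replication slot that has not previously reserved WAL")));
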
Commit 716ea626a attempted to fix the problem of building 1-D zero-size
arrays once and for all. But it turns out that contrib/intarray has some
code that doesn't use construct_array() but just builds arrays by hand,
so it didn't get the memo. This appears to affect all of subarray(),
intset_subtract(), inner_int_union(), inner_int_inter(), and
intarray_concat_arrays().
Back-patch into v11. In the past we've not back-patched this type of
change, but since v11 is still in beta it seems all right to include
this fix in it. Besides, it's more consistent to make the fix in v11
where 716ea626a appeared.
Report and patch by Alexey Kryuchkov, some cosmetic adjustments by me
Report: https://postgr.es/m/153053285112.13258.434620894305716755@wrigleys.postgresql.org
Discussion: https://postgr.es/m/CAN85JcYphDLYt4CpMDLZjjNVqGDrFJ5eS3YF=wLAhFoDQuBsyg@mail.gmail.com
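
The idiom these call sites now need, as a hedged sketch (the committed
patch differs in detail):

    /* An empty result must be a true empty (zero-dimensional) array,
     * never a 1-D array with zero elements. */
    if (num <= 0)
        return construct_empty_array(INT4OID);
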
Change a whole-database VACUUM into doing just pg_attribute, which is
the portion that verifies what we want it to do. The original
formulation wastes a lot of CPU time, which leads the test to fail when
its runtime exceeds the isolationtester timeout in super-slow setups,
such as under CLOBBER_CACHE_ALWAYS. Per buildfarm member friarbird.
It turns out that the previous shape of the test doesn't always detect
the condition it is supposed to detect (on unpatched reorderbuffer
code): the reason is that there is a good chance of encountering an
xl_running_xacts record (logged every 15 seconds) before the checkpoint
-- and because we advance the xmin when we receive that WAL record, and
we *don't* advance the xmin twice consecutively without receiving a
client message in between, that means the xmin is not advanced enough
for the tuple to be pruned from pg_attribute by VACUUM. So the test
would spuriously pass.
The reason this test deficiency wasn't detected earlier is that HOT
pruning removes the tuple anyway, even if vacuum leaves it in place, so
the test correctly fails (detecting the coding mistake), but for the
wrong reason.
To fix this mess, run the s0_get_changes step twice before vacuum
instead of once: this seems to cause the xmin to be advanced reliably,
wreaking havoc with more certainty.
Author: Arseny Sher
Discussion: https://postgr.es/m/87h8lkuxoa.fsf@ars-thinkpad
search.cpan.org has been EOL'd, with metacpan.org being the official
replacement to which URLs now redirect. Update links to match the new
URL. Also update links to CPAN to use https as it will redirect from
http.
Author: Daniel Gustafsson
Discussion: https://postgr.es/m/B74C0219-6BA9-46E1-A524-5B9E8CD3BDB3@yesql.se
Two closely related bugs are fixed. First, xmin of logical slots was
advanced too early. During xl_running_xacts processing, xmin of the
slot was set to the oldest running xid in the record, but that's wrong:
actually, snapshots which will be used for not-yet-replayed transactions
might consider older txns as running too, so we need to keep xmin back
for them. The problem wasn't noticed earlier because DDL that allows a
tuple to be deleted (its xmax set) while another not-yet-committed
transaction is still looking at it is pretty rare, if not unique: e.g. all
forms of ALTER TABLE that change the schema acquire an ACCESS EXCLUSIVE
lock, conflicting with any inserts. The included test case (test_decoding's
oldest_xmin) uses ALTER of a composite type, which doesn't have such
interlocking.
To deal with this, we must be able to quickly retrieve oldest xmin
(oldest running xid among all assigned snapshots) from ReorderBuffer. To
fix, add another list of ReorderBufferTXNs to the reorderbuffer, where
transactions are sorted by base-snapshot-LSN. This is slightly
different from the existing (sorted by first-LSN) list, because a
transaction can have an earlier LSN but a later Xmin, if its first
record does not obtain an xmin (e.g. xl_xact_assignment). Note this new
list doesn't fully replace the existing txn list: we still need that one
to prevent WAL recycling.
The second issue concerns SnapBuilder snapshots and subtransactions.
SnapBuildDistributeNewCatalogSnapshot never assigned a snapshot to a
transaction that is known to be a subtxn, which is good in the common
case that the top-level transaction already has one (no point in doing
so), but a bug otherwise. To fix, arrange to transfer the snapshot from
the subtxn to its top-level txn as soon as the kinship gets known.
test_decoding's snapshot_transfer verifies this.
Also, fix a minor memory leak: the refcount of the toplevel's old base
snapshot was not decremented when the snapshot was transferred from a child.
Liberally sprinkle code comments, and rewrite a few existing ones. This
part is my (Álvaro's) contribution to this commit, as I had to write all
those comments in order to understand the existing code and Arseny's
patch.
Reported-by: Arseny Sher <a.sher@postgrespro.ru>
Diagnosed-by: Arseny Sher <a.sher@postgrespro.ru>
Co-authored-by: Arseny Sher <a.sher@postgrespro.ru>
Co-authored-by: Álvaro Herrera <alvherre@alvh.no-ip.org>
Reviewed-by: Antonin Houska <ah@cybertec.at>
Discussion: https://postgr.es/m/87lgdyz1wj.fsf@ars-thinkpad
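
A hedged approximation of the retrieval function the new list enables
(member names follow the patch, but treat this as a sketch, not the
committed source):

    TransactionId
    ReorderBufferGetOldestXmin(ReorderBuffer *rb)
    {
        ReorderBufferTXN *txn;

        /* The list is ordered by base-snapshot LSN, so the head entry
         * has the oldest xmin among snapshot-bearing transactions. */
        if (dlist_is_empty(&rb->txns_by_base_snapshot_lsn))
            return InvalidTransactionId;

        txn = dlist_head_element(ReorderBufferTXN, base_snapshot_node,
                                 &rb->txns_by_base_snapshot_lsn);
        return txn->base_snapshot->xmin;
    }
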
Values greater than IV_MAX were incorrectly converted to SQL,
for instance ~0 would become -1 rather than 18446744073709551615
(on a 64-bit machine).
Dagfinn Ilmari Mannsåker, adjusted a bit by me
Discussion: https://postgr.es/m/d8jtvskjzzs.fsf@dalvik.ping.uio.no
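
The essential distinction, as a hedged sketch of the Perl API usage (not
the committed transform code):

    /* An SV flagged as holding an unsigned IV must be read with SvUV();
     * reading it with SvIV() wraps values above IV_MAX to negative. */
    if (SvUOK(sv))
        snprintf(buf, sizeof(buf), "%llu", (unsigned long long) SvUV(sv));
    else
        snprintf(buf, sizeof(buf), "%lld", (long long) SvIV(sv));
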
Bring this transform function into sync with the policy established
by commit 3a382983d.
Also, fix it to make sure that what it drills down to is indeed a
hash, and not some other kind of Perl SV. Previously, the test
cases added here provoked crashes.
Because of the crash hazard, back-patch to 9.5 where this module
was introduced.
Discussion: https://postgr.es/m/28336.1528393969@sss.pgh.pa.us
Per buildfarm, the output from Data::Dumper for an IEEE infinity
is platform-dependent (e.g. "inf" vs "Inf"). Just skip that one
test case in the plperlu test; testing it on the plperl side is
coverage enough. Fixes issue in commit 1731e3741.
We want, say, 2 to be transformed as 2, not \2, which is what the
original coding produced. Perl's standard seems to be to add an RV
wrapper only for hash and array SVs, so do it like that.
This was missed originally because the test cases only checked what came
out of a round trip back to SQL, and the strip-all-dereferences loop at
the top of SV_to_JsonbValue hides the extra refs from view. As a better
test, print the Perl value with Data::Dumper, like the hstore_plperlu
tests do. While we can't do that in the plperl test, only plperlu,
that should be good enough because this code is the same for both PLs.
But also add a simplistic test for extra REFs, which we can do in both.
That strip-all-dereferences behavior is now a bit dubious; it's unlike
what happens for other Perl-to-SQL conversions. However, the best
thing to do seems to be to leave it alone and make the other conversions
act similarly. That will be done separately.
Dagfinn Ilmari Mannsåker, adjusted a bit by me
Discussion: https://postgr.es/m/d8jlgbq66t9.fsf@dalvik.ping.uio.no
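
A hedged sketch of the wrapping rule (ownership handling elided; whether
newRV_inc or newRV_noinc applies depends on who owns the SV):

    /* Only aggregate SVs get a reference wrapper; scalars pass through. */
    if (SvTYPE(sv) == SVt_PVHV || SvTYPE(sv) == SVt_PVAV)
        result = newRV_noinc(sv);
    else
        result = sv;
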
The PyObject returned by PySequence_GetItem() was not released. Similar
code in PLyMapping_ToJsonbValue() is correct because, according to the
Python documentation, PyList_GetItem() and PyTuple_GetItem() return a
borrowed reference while PySequence_GetItem() returns a new reference.
contrib/jsonb_plpython is new in PostgreSQL 11, so no backpatch is needed.
Author: Nikita Glukhov
Discussion: https://postgr.es/m/6001af16-b242-2527-bc7e-84b8a959163b%40postgrespro.ru
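
The leak pattern, as a hedged illustration (not the patched function):

    Py_ssize_t  i;
    Py_ssize_t  len = PySequence_Size(seq);

    for (i = 0; i < len; i++)
    {
        PyObject   *item = PySequence_GetItem(seq, i);  /* NEW reference */

        if (item == NULL)
            break;              /* error already set by Python */
        /* ... convert item to a JsonbValue here ... */
        Py_DECREF(item);        /* required; omitting this was the leak */
    }
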
This complies with the perlcritic policy
Subroutines::RequireFinalReturn, which is a severity 4 policy. Since we
only currently check at severity level 5, the policy is raised to that
level until we move to level 4 or lower, so that any new infringements
will be caught.
A small cosmetic piece of tidying of the pgperlcritic script is
included.
Mike Blackwell
Discussion: https://postgr.es/m/CAESHdJpfFm_9wQnQ3koY3c91FoRQsO-fh02za9R3OEMndOn84A@mail.gmail.com
Print columns as "column C of <relation>" rather than "<relation> column
C". This seems to read noticeably better in English, as evidenced by the
regression test output changes, and the code change also makes it possible
for translators to adjust the phrase order in other languages.
Also change the output for OCLASS_DEFAULT from "default for %s" to
"default value for %s". This seems to read better and is also more
consistent with the output of, for instance, getObjectTypeDescription().
Kyotaro Horiguchi, per a complaint from me
Discussion: https://postgr.es/m/20180522.182020.114074746.horiguchi.kyotaro@lab.ntt.co.jp
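
The pattern in question, sketched (format string as described above;
the surrounding variables are illustrative only):

    /* Keep the whole phrase as one translatable unit so translators
     * can reorder it, rather than gluing pre-built fragments together. */
    appendStringInfo(&buffer, _("column %s of %s"),
                     attname, relname);
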
The "l" (ell) width spec means something in the corresponding scanf usage,
but not here. While modern POSIX says that applying "l" to "f" and other
floating format specs is a no-op, SUSv2 says it's undefined. Buildfarm
experience says that some old compilers emit warnings about it, and at
least one old stdio implementation (mingw's "ANSI" option) actually
produces wrong answers and/or crashes.
Discussion: https://postgr.es/m/21670.1526769114@sss.pgh.pa.us
Discussion: https://postgr.es/m/c085e1da-0d64-1c15-242d-c921f32e0d5c@dunslane.net
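
A self-contained illustration of the asymmetry:

    #include <stdio.h>

    int
    main(void)
    {
        double      v;

        sscanf("3.14", "%lf", &v);      /* scanf: "l" is required to read
                                         * into a double */
        printf("%f\n", v);              /* printf: plain "%f" suffices;
                                         * float args promote to double */
        /* printf("%lf\n", v) is a no-op in modern POSIX but undefined
         * under SUSv2 -- exactly the hazard described above. */
        return 0;
    }
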
Previously, we passed the toplevel PlannerInfo, but we actually want
to pass the relevant subroot. One problem with passing the toplevel
PlannerInfo is that the FDW which wants to push down an UPDATE or
DELETE against a join won't find the relevant joinrel there.
As of commit 1bc0100d270e5bcc980a0629b8726a32a497e788, postgres_fdw
tries to do exactly this and can be made to fail an assertion as a
result.
It's possible that this should be regarded as a bug fix and
back-patched to earlier releases, but for lack of a test case that
fails in earlier releases, no back-patch for now.
Etsuro Fujita, reviewed by Amit Langote.
Discussion: http://postgr.es/m/5AF43E02.30000@lab.ntt.co.jp
Commit 8b08f7d4820f failed to update these modules to at least give
non-broken error messages for partitioned indexes. Add appropriate
error support to them.
Peter G. was complaining about a problem of unfriendly error messages;
while we haven't fixed that yet, subsequent discussion led to the discovery
of these unhandled cases.
Author: Michaël Paquier
Reported-by: Peter Geoghegan
Discussion: https://postgr.es/m/CAH2-WzkOKptQiE51Bh4_xeEHhaBwHkZkGtKizrFMgEkfUuRRQg@mail.gmail.com
Fix the lone error message in the whole source tree to use capitalized
HASH when referring to hash indexes, making it look like all the other
messages.
Someday it would be good to standardize 'B-Tree', 'B-tree', 'btree', and
random other spellings, too, but that's a larger patch ...
Author: Álvaro Herrera
In 9.6, we moved a number of functions over to using the GRANT system to
control access instead of having hard-coded superuser checks.
As it turns out, adminpack was creating another function in the catalog
for one of those backend functions where the superuser check was
removed, specifically pg_rotate_logfile(), but it didn't get the memo
about having to REVOKE EXECUTE on the alternative-name function
(pg_logfile_rotate()), meaning that in any installations with adminpack
on 9.6 and higher, any user is able to run the pg_logfile_rotate()
function, which then calls pg_rotate_logfile() and rotates the logfile.
Fix by adding a new version of adminpack (1.1) which handles the REVOKE.
As this function should have only been available to the superuser, this
is a security issue, albeit a minor one.
In HEAD, move the changes implemented for adminpack up to be adminpack
2.0 instead of 1.1.
Security: CVE-2018-1115
autoprewarm.c mostly considered the number of blocks it might be dealing
with as being int64. This is unnecessary, because NBuffers is declared
as int, and there's been no suggestion that we might widen it in the
foreseeable future. Moreover, using int64 is problematic because the
code expected INT64_FORMAT to work with fscanf(), something we don't
guarantee, and which indeed fails on some older buildfarm members.
On top of that, the module randomly used uint32 rather than int64 variables
to hold block counters in several places, so it would fail anyway if we
ever did have NBuffers wider than that; and it also supposed that pg_qsort
could sort an int64 number of elements, which is wrong on 32-bit machines
(though no doubt a 32-bit machine couldn't actually have that many
buffers).
Hence, change all these variables to plain int.
In passing, avoid shadowing one variable named i with another,
and avoid casting away const in apw_compare_blockinfo.
Discussion: https://postgr.es/m/7773.1525288909@sss.pgh.pa.us
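
The reading side, as a hedged sketch (format string and error wording
approximate, not quoted from autoprewarm.c):

    int     num_elements;

    /* NBuffers is an int, so plain %d suffices, and unlike INT64_FORMAT
     * it is portable with fscanf. */
    if (fscanf(file, "<<%d>>\n", &num_elements) != 1)
        elog(ERROR, "autoprewarm block dump has an invalid header");
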
As in e348e7ae5727a6da8678036d748e5c5af7deb6c9 for jsonb/plperl, prevent
putting a NaN into a jsonb numeric field.
Tests for this had been removed in
6278a2a262b63faaf47eb2371f6bcb5b6e3ff118, but in case they are ever
resurrected: This would change the output of the test1nan() function to
an error.
Failure to use DatumGetFoo/FooGetDatum macros correctly, or at all,
causes some warnings about sign conversion. This is just cosmetic
at the moment but in principle it's a type violation, so clean up
the instances I could find.
autoprewarm.c and sharedfileset.c contained code that unportably
assumed that pid_t is the same size as int. We've variously dealt
with this by casting pid_t to int or to unsigned long for printing
purposes; I went with the latter.
Fix uninitialized-variable warning in RestoreGUCState. This is
a live bug in some sense, but of no great significance given that
nobody is very likely to care what "line number" is associated with
a GUC that hasn't got a source file recorded.
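
Hedged examples of the idioms involved (illustrative, not quotations):

    Datum       d = Int32GetDatum(n);       /* not (Datum) n */
    int32       m = DatumGetInt32(d);       /* not (int32) d */

    /* pid_t has no portable printf conversion; cast explicitly. */
    elog(LOG, "worker PID %lu", (unsigned long) pid);
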
Recent gcc can warn about switch-case fall throughs that are not
explicitly labeled as intentional. This seems like a good thing,
so clean up the warnings exposed thereby by labeling all such
cases with comments that gcc will recognize.
In files that already had one or more suitable comments, I generally
matched the existing style of those. Otherwise I went with
/* FALLTHROUGH */, which is one of the spellings approved at the
more-restrictive-than-default level -Wimplicit-fallthrough=4.
(At the default level you can also spell it /* FALL ?THRU */,
and it's not picky about case. What you can't do is include
additional text in the same comment, so some existing comments
containing versions of this aren't good enough.)
Testing with gcc 8.0.1 (Fedora 28's current version), I found that
I also had to put explicit "break"s after elog(ERROR) or ereport(ERROR);
apparently, for this purpose gcc doesn't recognize that those don't
return. That seems like possibly a gcc bug, but it's fine because
in most places we did that anyway; so this amounts to a visit from the
style police.
Discussion: https://postgr.es/m/15083.1525207729@sss.pgh.pa.us
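
The conventions in question, illustrated:

    switch (c)
    {
        case 'a':
            handle_a();
            /* FALLTHROUGH */
        case 'b':
            handle_b();
            break;
        default:
            elog(ERROR, "unexpected case: %d", c);
            break;      /* not reached, but keeps gcc quiet */
    }
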
Without these fixes, changes to the inserted tuple made by remote
triggers are ignored when building local RETURNING tuples.
In the core code, call ExecInitRoutingInfo at a later point from
within ExecInitPartitionInfo so that the FDW callback gets invoked
after the returning list has been built. But move CheckValidResultRel
out of ExecInitRoutingInfo so that it can happen at an earlier stage.
In postgres_fdw, refactor assorted deparsing functions to work with
the RTE rather than the PlannerInfo, which saves us having to
construct a fake PlannerInfo in cases where we don't have a real one.
Then, we can pass down a constructed RTE that yields the correct
deparse result when no real one exists. Unfortunately, this
necessitates a hack that understands how the core code manages RT
indexes for update tuple routing, which is ugly, but we don't have a
better idea right now.
Original report, analysis, and patch by Etsuro Fujita. Heavily
refactored by me. Then worked over some more by Amit Langote.
Discussion: http://postgr.es/m/5AD4882B.10002@lab.ntt.co.jp
It turns out that old Perl versions (before about 5.10) don't have any
very reliable way to generate Inf or NaN numeric values. Getting around
that would require way more work than is really justified to test the
code involved, so let's just drop these new test cases.
Discussion: https://postgr.es/m/28585.1525131438@sss.pgh.pa.us
jsonb uses numeric internally, and numeric can store NaN, but that is
not allowed by jsonb on input, so we shouldn't store it. Also reject
infinity, to get a consistent error message (numeric input would reject
infinity anyway).
Reported-by: Dagfinn Ilmari Mannsåker <ilmari@ilmari.org>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
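
The core idea, as a hedged sketch (the committed check and message
differ in detail):

    /* Reject non-finite values before building a jsonb numeric. */
    if (isnan(v) || isinf(v))
        ereport(ERROR,
                (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE),
                 errmsg("cannot convert NaN or infinity to jsonb")));
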
Update typedefs.list from current buildfarm results. Adjust pgindent's
typedef blacklist to block some more unfortunate typedef names that have
snuck in since last time. Manually tweak a few places where I didn't
like the initial results of pgindent'ing.
Also use palloc0() for main amcheck state, and adjust a few comments.
Somehow I pushed an old version of the patch in commit
4eaf7eaccb291f5d32d9f05284bfca9c5744de9a, so commit the difference.
Peter Geoghegan
When bt_index_parent_check() is called with the heapallindexed option,
allocate a second Bloom filter to fingerprint block numbers that appear
in the downlinks of internal pages. Use Bloom filter probes when
walking the B-Tree to detect missing downlinks. This can detect subtle
problems with page deletion/VACUUM, such as corruption caused by the bug
just fixed in commit 6db4b499.
The downlink Bloom filter is bound in size by work_mem. Its optimal
size is typically far smaller than that of the regular heapallindexed
Bloom filter, especially when the index has high fan-out.
Author: Peter Geoghegan
Reviewer: Teodor Sigaev
Discussion: https://postgr.es/m/CAH2-WznUzY4fWTjm1tBB3JpVz8cCfz7k_qVp5BhuPyhivmWJFg@mail.gmail.com
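
A hedged sketch of the two halves, using lib/bloomfilter.h (the helper
report_missing_downlink() is hypothetical):

    /* While walking internal pages: fingerprint each downlink. */
    bloom_add_element(downlinkfilter,
                      (unsigned char *) &childblkno, sizeof(BlockNumber));

    /* While walking the level below: probe for the block's downlink. */
    if (bloom_lacks_element(downlinkfilter,
                            (unsigned char *) &blkno, sizeof(BlockNumber)))
        report_missing_downlink(blkno);     /* corruption detected */
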
On further reflection, commit e5d83995e didn't go far enough: pretty much
everywhere in the planner that examines a clause's is_pushed_down flag
ought to be changed to use the more complicated behavior where we also
check the clause's required_relids. Otherwise we could make incorrect
decisions about whether, say, a clause is safe to use as a hash clause.
Some (many?) of these places are safe as-is, either because they are
never reached while considering a parameterized path, or because there
are additional checks that would reject a pushed-down clause anyway.
However, it seems smarter to just code them all the same way rather
than rely on easily-broken reasoning of that sort.
In support of that, invent a new macro RINFO_IS_PUSHED_DOWN that should
be used in place of direct tests on the is_pushed_down flag.
Like the previous patch, back-patch to all supported branches.
Discussion: https://postgr.es/m/f8128b11-c5bf-3539-48cd-234178b2314d@proxel.se
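
The macro reads approximately as follows:

    #define RINFO_IS_PUSHED_DOWN(rinfo, joinrelids) \
        ((rinfo)->is_pushed_down || \
         !bms_is_subset((rinfo)->required_relids, joinrelids))
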
Add several assertions that ensure that we're dealing with a pivot tuple
without non-key attributes where that's expected. Also, remove the
assertion within _bt_isequal(), restoring the v10 function signature. A
similar check will be performed for the page highkey within
_bt_moveright() in most cases. Also avoid dropping all objects within
regression tests, to increase pg_dump test coverage for INCLUDE indexes.
Rather than using infrastructure that's generally intended to be used
with reference counted heap tuple descriptors during truncation, use the
same function that was introduced to store flat TupleDescs in shared
memory (we use a temp palloc'd buffer). This isn't strictly necessary,
but seems more future-proof than the old approach. It also lets us
avoid including rel.h within indextuple.c, which was arguably a
modularity violation. Also, we now call index_deform_tuple() with the
truncated TupleDesc, not the source TupleDesc, since that's more robust,
and saves a few cycles.
In passing, fix a memory leak by pfree'ing truncated pivot tuple memory
during CREATE INDEX. Also pfree during a page split, just to be
consistent.
Refactor _bt_check_natts() to be more readable.
Author: Peter Geoghegan with some editorization by me
Reviewed by: Alexander Korotkov, Teodor Sigaev
Discussion: https://www.postgresql.org/message-id/CAH2-Wz%3DkCWuXeMrBCopC-tFs3FbiVxQNjjgNKdG2sHxZ5k2y3w%40mail.gmail.com
Spelling access(2)'s second argument as "2" is just horrid.
POSIX makes no promises as to the numeric values of W_OK and related
macros. Even if it accidentally works as intended on every supported
platform, it's still unreadable and inconsistent with adjacent code.
In passing, don't spell "NULL" as "0" either. Yes, that's legal C;
no, it's not project style.
Back-patch, just in case the unportability is real and not theoretical.
(Most likely, even if a platform had different bit assignments for
access()'s modes, there'd not be an observable behavior difference
here; but I'm being paranoid today.)
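
The two spellings side by side, for illustration (proceed() is a
placeholder):

    #include <stddef.h>
    #include <unistd.h>

    /* Readable and portable: */
    if (access(path, W_OK) == 0 && ptr != NULL)
        proceed();

    /* What the old code amounted to -- legal C, but opaque, and POSIX
     * does not promise that W_OK == 2: */
    if (access(path, 2) == 0 && ptr != 0)
        proceed();
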
We've made multiple attempts to stabilize the plans shown by commit
1bc0100d2, with little success so far. The reason for the remaining
instability seems to be that if a transaction (such as auto-analyze)
is running concurrently with the test, then get_actual_variable_range may
return a maximum value for "T 1"."C 1" that's far away from the actual max,
as a result of our having transiently inserted such a value earlier in
the test. Because we use a non-MVCC snapshot to fetch the value (for
performance reasons), the presence of other transactions can cause that
function to return entries that are actually dead.
To fix, use a less extreme value in the earlier transient insertion, so
that whether it is visible or not won't affect the selectivity estimate.
The use of 9999 there seems to have been picked with the aid of a
dartboard anyway, rather than having a specific reason.
Discussion: https://postgr.es/m/16962.1523551784@sss.pgh.pa.us
This reverts commits d204ef63776b8a00ca220adec23979091564e465,
83454e3c2b28141c0db01c7d2027e01040df5249 and a few more commits thereafter
(complete list at the end) related to MERGE feature.
While the feature was fully functional, with sufficient test coverage and
necessary documentation, it was felt that some parts of the executor and
parse-analyzer could use a different design, and it wasn't possible to do
that in the available time. So it was decided to revert the patch for PG11
and retry in the future.
Thanks again to all reviewers and bug reporters.
List of commits reverted, in reverse chronological order:
f1464c5380 Improve parse representation for MERGE
ddb4158579 MERGE syntax diagram correction
530e69e59b Allow cpluspluscheck to pass by renaming variable
01b88b4df5 MERGE minor errata
3af7b2b0d4 MERGE fix variable warning in non-assert builds
a5d86181ec MERGE INSERT allows only one VALUES clause
4b2d44031f MERGE post-commit review
4923550c20 Tab completion for MERGE
aa3faa3c7a WITH support in MERGE
83454e3c2b New files for MERGE
d204ef6377 MERGE SQL Command following SQL:2016
Author: Pavan Deolasee
Reviewed-by: Michael Paquier
In commit 9c0a0de4c, I'd failed to notice that catalog/catalog.h
should also be considered a frontend-unsafe header, because it includes
(and needs) the full form of pg_class.h, not to mention relcache.h.
However, various frontend code was depending on it to get
TABLESPACE_VERSION_DIRECTORY, so refactoring of some sort is called for.
The cleanest answer seems to be to move TABLESPACE_VERSION_DIRECTORY,
as well as the OIDCHARS symbol, to common/relpath.h. Do that, and mop up
inclusions as necessary. (I found that quite a few current users of
catalog/catalog.h don't seem to need it at all anymore, apparently as a
result of the refactorings that created common/relpath.[hc]. And
initdb.c needed it only as a route to pg_class_d.h.)
Discussion: https://postgr.es/m/6629.1523294509@sss.pgh.pa.us
Repeating these tests adds unnecessary cycles, since no improvement in
test coverage is expected.
Cleanup from commit 8224de4f42ccf98e08db07b43d52fed72f962ebb.
Peter Geoghegan