This reverts commit 9f984ba6d23dc6eecebf479ab1d3f2e550a4e9be.
It was making the buildfarm unhappy; apparently, setting client_min_messages
in a regression test produces different output if log_statement='all'.
Another issue is that I now suspect it was in fact not correct for the bit
sortsupport function to call byteacmp().  Revert to investigate both of
those issues.

Up to now, get_const_expr() insisted on prefixing BIT and VARBIT
literals with 'B'. That's not really necessary, because we always
append explicit-cast syntax to identify the constant's type.
Moreover, it's subtly wrong for VARBIT, because the parser will
interpret B'...' as '...'::"bit"; see make_const() which explicitly
assigns type BITOID for a T_BitString literal. So what had been
a simple VARBIT literal is reconstructed as ('...'::"bit")::varbit,
which is not the same thing, at least not before constant folding.
This results in odd differences after dump/restore, as complained
of by the patch submitter, and it could result in actual failures in
partitioning or inheritance DDL operations (see commit 542320c2b,
which repaired similar misbehaviors for some other data types).
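
A quick way to see the typing difference in psql (the values here are
arbitrary):

    SELECT pg_typeof(B'101');          -- bit (a T_BitString literal gets BITOID)
    SELECT pg_typeof('101'::varbit);   -- bit varying
    SELECT pg_typeof(B'101'::varbit);  -- bit varying, but built as ('101'::"bit")::varbit
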
Fixing it is pretty easy: just remove the special case and let the
default code path handle these types. We could have kept the special
case for BIT only, but there seems little point in that.
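
As a rough sketch of the visible effect (object names are arbitrary, and the
exact deparsed text can vary), a varbit constant in a view now comes back as
a plain quoted literal instead of a B'...' literal:

    CREATE VIEW bitview AS SELECT '1010'::varbit AS b;
    SELECT pg_get_viewdef('bitview'::regclass);
    -- previously deparsed roughly as:  SELECT B'1010'::bit varying AS b;
    -- with this change, roughly:       SELECT '1010'::bit varying AS b;
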
Like the previous patch, I judge that back-patching this into stable
branches wouldn't be a good idea. However, it seems not quite too
late for v11, so let's fix it there.

Paul Guo, reviewed by Davy Machado and John Naylor, minor adjustments
by me
Discussion: https://postgr.es/m/CABQrizdTra=2JEqA6+Ms1D1k1Kqw+aiBBhC9TreuZRX2JzxLAA@mail.gmail.com

inet, cidr, and timetz indexes still cannot support index-only scans,
because they don't store the original unmodified value in the index, but a
derived approximate value.
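
On 9.6 and later servers, whether a given index column can hand back the
original value (and thus feed an index-only scan) can be probed with
pg_index_column_has_property(); the names below are arbitrary, and the
result depends on the access method, opclass, and server version:

    CREATE TABLE hosts (addr inet);
    CREATE INDEX hosts_addr_gist ON hosts USING gist (addr inet_ops);
    -- 'returnable' reports whether the index can return the original value
    SELECT pg_index_column_has_property('hosts_addr_gist'::regclass, 1, 'returnable');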