Commit b676ac443b optimized handling of tuple slots with bulk inserts
into foreign tables, so that the slots are initialized only once and
reused for all batches. However, the data was copied into the slots only
during that initialization, so reused slots kept their old contents and
duplicate values got inserted in later batches. Fixed by moving the
ExecCopySlot call outside the initialization branch.
The existing postgres_fdw tests failed to catch this because they insert
data into foreign tables without unique indexes and then check only the
number of inserted rows. This adds a new test with both a unique index
and a check of the inserted values.
Reported-by: Alexander Pyhalov
Discussion: https://postgr.es/m/7a8cf8d56b3d18e5c0bccd6cd42d04ac%40postgrespro.ru
I started out with the goal of reporting ERRCODE_CONNECTION_FAILURE
when walrcv_connect() fails, but as I looked around I realized that
whoever wrote this code was of the opinion that errcodes are purely
optional. That's not my understanding of our project policy. Hence,
make sure that an errcode is provided in each ereport that (a) is
ERROR or higher level and (b) isn't arguably an internal logic error.
Also fix some very dubious existing errcode assignments.
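For the original goal, the result looks roughly like this (an
illustrative sketch only; the variable names depend on the call site):

    wrconn = walrcv_connect(conninfo, false, appname, &err);
    if (wrconn == NULL)
        ereport(ERROR,
                (errcode(ERRCODE_CONNECTION_FAILURE),
                 errmsg("could not connect to the primary server: %s",
                        err)));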
While the missing errcodes are contrary to policy, the issue is also
largely cosmetic, since few of these cases could get reported to
applications. So I don't feel a need to back-patch.
Discussion: https://postgr.es/m/2189704.1623512522@sss.pgh.pa.us
Since commit c24dcd0cfd, we have been using pg_pread() to read the WAL
file, which doesn't change the seek position (unless we fall back to
the implementation in src/port/pread.c). Update comment accordingly.
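For reference, the semantics relied on here are those of plain POSIX
pread(), which pg_pread() wraps where available (standalone demo, not
PostgreSQL code):

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int
    main(void)
    {
        char    buf[4];
        int     fd = open("/etc/hosts", O_RDONLY);

        if (fd < 0)
            return 1;
        (void) pread(fd, buf, sizeof(buf), 0);
        /* Unlike read(), pread() leaves the offset untouched: prints 0 */
        printf("offset after pread: %ld\n", (long) lseek(fd, 0, SEEK_CUR));
        close(fd);
        return 0;
    }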
Backpatch-through: 12, where we started to use pg_pread()
Bugfix commit 5fc89376 effectively made the lock_waiter_detected field
from vacuumlazy.c's global state struct into private state owned by
lazy_truncate_heap(). Finish this off by replacing the struct field
with a local variable.
It has been spotted that multiranges lack the ability to be decomposed
into individual ranges. Subscripting and a proper expanded object
representation require substantial work, and it's too late for v14. This
commit provides the implementation of unnest(multirange) and a cast from
multirange to an array of ranges, both of which are quite trivial.
unnest(multirange) is defined as a polymorphic function. The catalog
entry for the function underlying the cast is duplicated for each
multirange type, because we don't have an anyrangearray polymorphic type
to use here.
Catversion is bumped.
Reported-by: Jonathan S. Katz
Discussion: https://postgr.es/m/flat/60258efe-bd7e-4886-82e1-196e0cac5433%40postgresql.org
Author: Alexander Korotkov
Reviewed-by: Justin Pryzby, Jonathan S. Katz, Zhihong Yu
During decoding of speculative inserts, we relied on confirmation
records or subsequent change records to clean up the toast hash. But
that could lead to multiple problems: (a) a memory leak if there is
neither a confirmation record nor any other record after the toast
insertion for a speculative insert in the transaction, and (b) errors
and assertion failures if the next operation is not an insert/update on
the same table.
The fix is to start queuing the spec-abort change and to clean up the
toast hash and the change record while processing it. Currently, we
queue the spec aborts for both the toast and the main table even though
we perform the cleanup while processing the main table's spec-abort
record. Later, if we get a way to distinguish between the spec-abort
records of the toast and the main table, we can avoid queuing the change
for spec aborts of toast tables.
Reported-by: Ashutosh Bapat
Author: Dilip Kumar
Reviewed-by: Amit Kapila
Backpatch-through: 9.6, where it was introduced
Discussion: https://postgr.es/m/CAExHW5sPKF-Oovx_qZe4p5oM6Dvof7_P+XgsNAViug15Fm99jA@mail.gmail.com
It was unable to wait on a backend that had already left the procarray.
Users tolerant of that limitation can poll pg_stat_activity. Other
users can employ the "timeout" argument of pg_terminate_backend().
Reviewed by Bharath Rupireddy.
Discussion: https://postgr.es/m/20210605013236.GA208701@rfd.leadboat.com
Revert the pg_description entry to its v13 form, since those messages
usually remain shorter and don't discuss individual parameters. No
catversion bump, since pg_description content does not impair backend
compatibility or application compatibility.
Justin Pryzby
Discussion: https://postgr.es/m/20210612182743.GY16435@telsasoft.com
An object found to be dropped while digging into the list of objects
returned by pg_event_trigger_ddl_commands() could cause a cache lookup
error, as the calls fetching the object address and the type name would
fail for a missing object.
Those lookup errors could be seen with combinations of ALTER TABLE
sub-commands involving identity columns. The lookup logic in this code
path is changed to behave like any other SQL-callable function by
ignoring objects that are not found, taking advantage of 2a10fdc. The
back-branches are not changed, as they would require that commit, which
is too invasive for stable branches.
While on it, add test cases to exercise event triggers with identity
columns, and stress more cases with the event ddl_command_end for
relations.
Author: Sven Klemm, Aleksander Alekseev, Michael Paquier
Discussion: https://postgr.es/m/CAMCrgp2R1cEXU53iYKtW6yVEp2_yKUz+z=3-CTrYpPP+xryRtg@mail.gmail.com
The extra checks added by the recompression of toast data introduced in
bbe0a81 are proving to have a performance impact on VACUUM or CLUSTER
even if no recompression is done. This is more noticeable the more
toastable columns with non-NULL values are involved.
Improvements could be made to render those extra checks less expensive,
but that's not material for 14 at this stage, and we are not sure either
whether the VACUUM FULL/CLUSTER code path is suited for this job.
Per discussion with several people, including Andres Freund, Robert
Haas, Álvaro Herrera, Tom Lane and myself.
Discussion: https://postgr.es/m/20210527003144.xxqppojoiwurc2iz@alap3.anarazel.de
Previously, a zero value for the relfilenode resulted in
a confusing error message about "unexpected duplicate".
This function returns NULL for other invalid relfilenode
values, so zero should be treated likewise.
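The guard is a one-liner along these lines (a sketch; assuming the
function involved follows the usual fmgr conventions):

    /* A zero relfilenode is invalid; return NULL as for other bad values */
    if (!OidIsValid(relfilenode))
        PG_RETURN_NULL();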
It's been like this all along, so back-patch to all supported
branches.
Justin Pryzby
Discussion: https://postgr.es/m/20210612023324.GT16435@telsasoft.com
Using an Assert to check the validity of incoming messages is an
extremely poor decision. In a debug build, it should not be that easy
for a broken or malicious remote client to crash the logrep worker.
The consequences could be even worse in non-debug builds, which will
fail to make such checks at all, leading to who-knows-what misbehavior.
Hence, promote every Assert that could possibly be triggered by wrong
or out-of-order replication messages to a full test-and-ereport.
To avoid bloating the set of messages the translation team has to cope
with, establish a policy that replication protocol violation error
reports don't need to be translated. Hence, all the new messages here
use errmsg_internal(). A couple of old messages are changed likewise
for consistency.
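The promotions follow this general pattern (a sketch; the actual
conditions and message texts vary per call site, and "remoteid" here is
illustrative):

    /* Before: unchecked in production builds, a crash in debug builds */
    Assert(rel != NULL);

    /* After: always checked, and exempt from translation */
    if (rel == NULL)
        ereport(ERROR,
                (errcode(ERRCODE_PROTOCOL_VIOLATION),
                 errmsg_internal("no relation map entry for remote relation ID %u",
                                 remoteid)));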
Along the way, fix some non-idiomatic or outright wrong uses of
hash_search().
Most of these mistakes are new with the "streaming replication"
patch (commit 464824323), but a couple go back a long way.
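The idiomatic hash_search() pattern being restored looks like this
(generic sketch with a hypothetical entry type):

    bool        found;
    MyEntry    *entry;

    /* HASH_ENTER creates the entry if absent; 'found' reports which case */
    entry = (MyEntry *) hash_search(htab, &key, HASH_ENTER, &found);
    if (!found)
    {
        /* initialize everything beyond the key exactly once */
        entry->count = 0;
    }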
Back-patch as appropriate.
Discussion: https://postgr.es/m/1719083.1623351052@sss.pgh.pa.us
This routine is designed to never return an empty description or NULL,
providing description fallbacks even if missing objects are accepted,
but it included a code path where this was considered possible. All the
callers of this routine already assume that NULL cannot be returned, so
adjust the code to match the callers' assumptions, and add more comments
close to the callers to outline the expected behavior.
This code is new as of 2a10fdc, so no backpatch is needed.
Discussion: https://postgr.es/m/YMNY6RGPBRCeLmFb@paquier.xyz
941697c3c1 changed ProcArrayAdd()/Remove() substantially. As reported by
zhanyi, I missed that, with the introduction of PGPROC->pgxactoff,
ProcArrayRemove() does not need to search for the position in
procArray->pgprocnos anymore - the position is pgxactoff itself. Remove
the search loop.
The removal of the search loop reduces assertion coverage a bit - but we can
easily do better than before by adding assertions to other loops over
pgprocnos elements.
Also do a bit of cleanup, mostly by reducing line lengths by introducing local
helper variables and adding newlines.
Author: zhanyi <w@hidva.com>
Author: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/tencent_5624AA3B116B3D1C31CA9744@qq.com
We were already reporting it, but only after the parallel workers were
finished, which is visibly much later than what happens in a serial
build.
With this change we report it when the leader starts its own sort phase
when participating in the build (the normal case). Now this might
happen a little later than when the workers start their sorting phases,
but a) communicating the actual phase start from workers is likely to be
a hassle, and b) the sort phase start is pretty fuzzy anyway, since
sorting per se is actually initiated by tuplesort.c internally earlier
than tuplesort_performsort() is called.
Backpatch to pg12, where the progress reporting code for CREATE INDEX
went in.
Reported-by: Tomas Vondra <tomas.vondra@enterprisedb.com>
Author: Matthias van de Meent <boekewurm+postgres@gmail.com>
Reviewed-by: Greg Nancarrow <gregn4422@gmail.com>
Reviewed-by: Álvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://postgr.es/m/1128176d-1eee-55d4-37ca-e63644422adb
apply_handle_tuple_routing(), having detected and reported that
the tuple it needed to update didn't exist, tried to update that
tuple anyway, leading to a null-pointer dereference.
logicalrep_partition_open() failed to ensure that the
LogicalRepPartMapEntry it built for a partition was fully
independent of that for the partition root, leading to
trouble if the root entry was later freed or rebuilt.
Meanwhile, on the publisher's side, pgoutput_change() sometimes
attempted to apply execute_attr_map_tuple() to a NULL tuple.
The first of these was reported by Sergey Bernikov in bug #17055;
I found the other two while developing some test cases for this
sadly under-tested code.
Diagnosis and patch for the first issue by Amit Langote; patches
for the others by me; new test cases by me. Back-patch to v13
where this logic came in.
Discussion: https://postgr.es/m/17055-9ba800ec8522668b@postgresql.org
Commit b663a41363 introduced bulk inserts for FDW, but the handling of
tuple slots turned out to be problematic for two reasons. Firstly, the
slots were re-created for each individual batch. Secondly, all slots
referenced the same tuple descriptor - with reasonably small batches
this is not an issue, but with large batches this triggers O(N^2)
behavior in the resource owner code.
These two issues work against each other - to reduce the number of times
a slot has to be created/dropped, larger batches are needed. However,
the larger the batch, the more expensive the resource owner gets. For
practical batch sizes (100 - 1000) this would not be a big problem, as
the benefits (latency savings) greatly exceed the resource owner costs.
But for extremely large batches it might be much worse, possibly even
losing with non-batching mode.
Fixed by initializing tuple slots only once (and reusing them across
batches) and by using a new tuple descriptor copy for each slot.
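A sketch of the resulting setup (hypothetical variable names; the point
is the one-time initialization plus the per-slot descriptor copy):

    if (resultRelInfo->ri_Slots == NULL)
    {
        resultRelInfo->ri_Slots = palloc(sizeof(TupleTableSlot *) * batchSize);
        for (int i = 0; i < batchSize; i++)
        {
            /* each slot owns its own descriptor copy, so the resource
             * owner doesn't accumulate N references to one descriptor */
            resultRelInfo->ri_Slots[i] =
                MakeSingleTupleTableSlot(CreateTupleDescCopy(tupdesc),
                                         slot->tts_ops);
        }
    }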
Discussion: https://postgr.es/m/ebbbcc7d-4286-8c28-0272-61b4753af761%40enterprisedb.com
The code added to mark replication slots invalid in commit c655077639
had the race condition that a slot can be dropped or advanced
concurrently with checkpointer trying to invalidate it. Rewrite the
code to close those races.
The changes to ReplicationSlotAcquire's API added with c655077639 are
not necessary anymore. To avoid an ABI break in released branches, this
commit leaves that unchanged; it'll be changed in a master-only commit
separately.
Backpatch to 13, where this code first appeared.
Reported-by: Andres Freund <andres@anarazel.de>
Author: Andres Freund <andres@anarazel.de>
Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://postgr.es/m/20210408001037.wfmk6jud36auhfqm@alap3.anarazel.de
We've accumulated quite a mix of instances of "an SQL" and "a SQL" in the
documents. It would be good to be a bit more consistent with these.
The most recent version of the SQL standard I looked at seems to prefer
"an SQL". That seems like a good lead to follow, so here we change all
instances of "a SQL" to become "an SQL". Most instances correctly use
"an SQL" already, so it also makes sense to use the dominant variation in
order to minimise churn.
Additionally, some other abbreviations needed to be adjusted: FSM, SSPI,
SRF and a few others. Also fix some pronounceable abbreviations to use
"a" instead of "an"; for example, "a SASL" instead of "an SASL".
Here I've only adjusted the documents and error messages. Many others
still exist in source code comments. Translator hint comments seem to be
the biggest culprit. It currently does not seem worth the churn to change
these.
Discussion: https://postgr.es/m/CAApHDvpML27UqFXnrYO1MJddsKVMQoiZisPvsAGhKE_tsKXquw%40mail.gmail.com
Commit 2453ea142 redefined pg_proc.proargtypes to include the types of
OUT parameters, for procedures only. While that had some advantages
for implementing the SQL-spec behavior of DROP PROCEDURE, it was pretty
disastrous from a number of other perspectives. Notably, since the
primary key of pg_proc is name + proargtypes, this made it possible to
have multiple procedures with identical names + input arguments and
differing output argument types. That would make it impossible to call
any one of the procedures by writing just NULL (or "?", or any other
data-type-free notation) for the output argument(s). The change also
seems likely to cause grave confusion for client applications that
examine pg_proc and expect the traditional definition of proargtypes.
Hence, revert the definition of proargtypes to what it was, and
undo a number of complications that had been added to support that.
To support the SQL-spec behavior of DROP PROCEDURE, when there are
no argmode markers in the command's parameter list, we perform the
lookup both ways (that is, matching against both proargtypes and
proallargtypes), succeeding if we get just one unique match.
In principle this could result in ambiguous-function failures
that would not happen when using only one of the two rules.
However, overloading of procedure names is thought to be a pretty
rare usage, so this shouldn't cause many problems in practice.
Postgres-specific code such as pg_dump can defend against any
possibility of such failures by being careful to specify argmodes
for all procedure arguments.
This also fixes a few other bugs in the area of CALL statements
with named parameters, and improves the documentation a little.
catversion bump forced because the representation of procedures
with OUT arguments changes.
Discussion: https://postgr.es/m/3742981.1621533210@sss.pgh.pa.us
It turns out that worker.c's code path for TRUNCATE was also
careless about establishing a snapshot while executing user-defined
code, allowing the checks added by commit 84f5c2908 to fail when
a trigger is fired in that context.
We could just wrap Push/PopActiveSnapshot around the truncate call,
but it seems better to establish a policy of holding a snapshot
throughout execution of a replication step. To help with that and
possible future requirements, replace the previous ensure_transaction
calls with pairs of begin/end_replication_step calls.
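The helper pair is roughly as follows (a sketch close to, but not
necessarily identical to, the committed code):

    static void
    begin_replication_step(void)
    {
        SetCurrentStatementStartTimestamp();
        if (!IsTransactionState())
            StartTransactionCommand();
        PushActiveSnapshot(GetTransactionSnapshot());
    }

    static void
    end_replication_step(void)
    {
        PopActiveSnapshot();
        CommandCounterIncrement();
    }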
Per report from Mark Dilger. Back-patch to v11, like the previous
changes.
Discussion: https://postgr.es/m/B4A3AF82-79ED-4F4C-A4E5-CD2622098972@enterprisedb.com
Previously, we left the EPQ sub-executor alone until ExecEndLockRows.
This caused any buffer pins or other resources that it might hold to
remain held until ExecutorEnd, which in some code paths means that
they are held till the Portal is closed. That can cause user-visible
problems, such as blocking VACUUM; and it's unlike the behavior of
ordinary table-scanning nodes, which will have released all buffer
pins by the time they return an EOF indication.
We can make LockRows work more like other plan nodes by calling
EvalPlanQualEnd just before returning NULL. We still need to call it
in ExecEndLockRows in case the node was not run to completion, but in
the normal case the second call does nothing and costs little.
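The EOF path in ExecLockRows() then looks roughly like this (sketch):

    slot = ExecProcNode(outerPlan);
    if (TupIsNull(slot))
    {
        /* Shut down the EPQ machinery eagerly, releasing its buffer
         * pins, the way ordinary scan nodes release pins at EOF. */
        EvalPlanQualEnd(&node->lr_epqstate);
        return NULL;
    }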
Per report from Yura Sokolov. In principle this is a longstanding
bug, but in view of the lack of other complaints and the low severity
of the consequences, I chose not to back-patch.
Discussion: https://postgr.es/m/4aa370cb91ecf2f9885d98b80ad1109c@postgrespro.ru
This only happens if (1) the new standby has no WAL available locally,
(2) the new standby is starting from the old timeline, (3) the promotion
happened in the WAL segment from which the new standby is starting,
(4) the timeline history file for the new timeline is available from
the archive but the WAL files for it are not (i.e. this is a race),
(5) the WAL files for the new timeline are available via streaming,
and (6) recovery_target_timeline='latest'.
Commit ee994272ca introduced this
logic and was an improvement over the previous code, but it mishandled
this case. If recovery_target_timeline='latest' and restore_command is
set, validateRecoveryParameters() can change recoveryTargetTLI to be
different from receiveTLI. If streaming is then tried afterward,
expectedTLEs gets initialized with the history of the wrong timeline.
It's supposed to be a list of entries explaining how to get to the
target timeline, but in this case it ends up with a list of entries
explaining how to get to the new standby's original timeline, which
isn't right.
Dilip Kumar and Robert Haas, reviewed by Kyotaro Horiguchi.
Discussion: http://postgr.es/m/CAFiTN-sE-jr=LB8jQuxeqikd-Ux+jHiXyh4YDiZMPedgQKup0g@mail.gmail.com
PersistHoldablePortal has long assumed that it should store the
entire output of the query-to-be-persisted, which requires rewinding
and re-reading the output. This is problematic if the query is not
stable: we might get different row contents, or even a different
number of rows, which'd confuse the cursor state mightily.
In the case where the cursor is NO SCROLL, this is very easy to
solve: just store the remaining query output, without any rewinding,
and tweak the portal's cursor state to match. Aside from removing
the semantic problem, this could be significantly more efficient
than storing the whole output.
If the cursor is scrollable, there's not much we can do, but it
was already the case that scrolling a volatile query's result was
pretty unsafe. We can just document more clearly that getting
correct results from that is not guaranteed.
There are already prohibitions in place on using SCROLL with
FOR UPDATE/SHARE, which is one way for a SELECT query to have
non-stable results. We could imagine prohibiting SCROLL when
the query contains volatile functions, but that would be
expensive to enforce. Moreover, it could break applications
that work just fine, if they have functions that are in fact
stable but the user neglected to mark them so. So settle for
documenting the hazard.
While this problem has existed in some guise for a long time,
it got a lot worse in v11, which introduced the possibility
of persisting plpgsql cursors (perhaps implicit ones) even
when they violate the rules for what can be marked WITH HOLD.
Hence, I've chosen to back-patch to v11 but not further.
Per bug #17050 from Алексей Булгаков.
Discussion: https://postgr.es/m/17050-f77aa827dc85247c@postgresql.org
gram.y should discard NULL pointers (empty statements) when
assembling a routine_body_stmt_list, as it does for other
sorts of statement lists.
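The grammar action likely ends up mirroring stmtmulti, along these lines
(sketch of the bison rule with its C action):

    routine_body_stmt_list:
            routine_body_stmt_list ';' routine_body_stmt
                {
                    /* As in stmtmulti, discard empty statements */
                    if ($3 != NULL)
                        $$ = lappend($1, $3);
                    else
                        $$ = $1;
                }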
Julien Rouhaud and Tom Lane, per report from Noah Misch.
Discussion: https://postgr.es/m/20210606044418.GA297923@rfd.leadboat.com
The use of this function is limited to superusers, and the code includes
a hardcoded check for that. However, the code would look up the PGPROC
entry to signal for the memory dump before checking whether the user is
a superuser, which makes no sense if we already know that an error will
be raised. Note that the old code would let even non-authorized users
find out whether a process was a PostgreSQL process, which is no longer
the case; this also avoids taking ProcArrayLock when doing so would most
likely prove unnecessary.
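Schematically, the privilege check just moves ahead of the PGPROC lookup
(a sketch with generic message text):

    /* Reject non-superusers before touching the procarray at all */
    if (!superuser())
        ereport(ERROR,
                (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
                 errmsg("must be a superuser to use this function")));

    proc = BackendPidGetProc(pid);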
Thanks to Julien Rouhaud and Tom Lane for the discussion.
Discussion: https://postgr.es/m/YLxw1uVGIAP5uMPl@paquier.xyz
Commit 19890a064e added the option to enable two_phase commits via
pg_create_logical_replication_slot but didn't extend the same support to
the replication protocol. However, by mistake, it added the two_phase
variable to CreateReplicationSlotCmd, which is required only once we
extend the replication protocol.
Reported-by: Jeff Davis
Author: Ajin Cherian
Reviewed-by: Amit Kapila
Discussion: https://postgr.es/m/64b9f783c6e125f18f88fbc0c0234e34e71d8639.camel@j-davis.com
In cases where run-time pruning isn't required, the synchronous and
asynchronous subplans for an async-aware Append node, as determined
using classify_matching_subplans(), should be reused when rescanning the
node. However, the previous code re-determined them using that function
each time the node was rescanned, leading to incorrect results in a
normal build and an Assert failure in an Assert-enabled build, as that
function doesn't expect to be called repeatedly in such cases. Fix the
code as described above.
My oversight in commit 27e1f1456.
While at it, initialize async-related pointers/variables to NULL/zero
explicitly in ExecInitAppend() and ExecReScanAppend(), just to be sure.
(The variables would already have been set to zero before we get to the
latter function, but let's be explicit.)
Reviewed-by: Kyotaro Horiguchi
Discussion: https://postgr.es/m/CAPmGK16Q4B2_KY%2BJH7rb7wQbw54AUprp7TMekGTd2T1B62yysQ%40mail.gmail.com
Other equalfuncs.c checks on CoercionForm fields use
COMPARE_COERCIONFORM_FIELD (which makes them no-ops),
but commit 40c24bfef neglected to make _equalFuncCall
do likewise. Fix that.
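That is, the comparison becomes a deliberate no-op (sketch of the
one-line change inside _equalFuncCall()):

    /* display-only field: compared like other CoercionForm fields */
    COMPARE_COERCIONFORM_FIELD(funcformat);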
This is only strictly correct if FuncCall.funcformat has
no semantic effect, instead just determining ruleutils.c
display formatting. 40c24bfef added a couple of checks
in parse analysis that could break that rule; but on closer
inspection, they're redundant, so just take them out again.
Per report from Noah Misch.
Discussion: https://postgr.es/m/20210606063331.GC297923@rfd.leadboat.com
This tidies up some questionable coding which made use of
DatumGetPointer() for Datums being passed into functions where the
parameter is expected to be a cstring. We saw no compiler warnings with
the old code as the Pointer type used in DatumGetPointer() happens to
be a char * rather than a void *. However, that's no excuse and we should
be using the correct macro for the job.
Here we also make use of OutputFunctionCall() rather than using
FunctionCall1() directly to call the type's output function.
OutputFunctionCall() is the standard way to do this. It casts the
returned value to a cstring for us.
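The before/after shape is roughly this (sketch; "typoutput" stands for
the output function's FmgrInfo):

    /* Before: compiles only because Pointer happens to be char * */
    char   *outstr = DatumGetPointer(FunctionCall1(&typoutput, value));

    /* After: the standard idiom, which returns the cstring for us */
    char   *outstr = OutputFunctionCall(&typoutput, value);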
In passing, get rid of a duplicate call to strlen(). Most compilers will
likely optimize away the second call, but some may not. In any case,
this just aligns the code with other nearby code that already does this.
Discussion: https://postgr.es/m/CAApHDvq1D=ehZ8hey8Hz67N+_Zth0GHO5wiVCfv1YcGPMXJq0A@mail.gmail.com
The documented intent is for all columns except subconninfo to be
publicly readable. However, this has been overlooked twice.
subsynccommit has never been readable since it was introduced,
nor has the oid column (which is important for joining).
Given the lack of previous complaints, it's not clear that it's
worth doing anything about this in the back branches. But there's
still time to fix it inexpensively for v14.
Per report from Israel Barth (via Euler Taveira).
Patch by Euler Taveira, possibly-vain comment updates by me.
Discussion: https://postgr.es/m/b8f7c17c-0041-46b6-acfe-2d1f5a985ab4@www.fastmail.com
The internal SQL queries used by REFRESH MATERIALIZED VIEW CONCURRENTLY
include some aliases for its diff and temporary relations with rather
generic names: diff, newdata, newdata2 and mv. Depending on the queries
used for the materialized view, using CONCURRENTLY could lead to
internal failures if the matview query conflicts with those internal
aliases.
Those names were chosen in 841c29c8. This commit switches instead to a
naming pattern less likely to cause conflicts, based on an idea from
Thomas Munro: appending _$ to those aliases. This is not perfect, as the
new names could still conflict, but it at least has the advantage of
keeping the code readable and simple while reducing the likelihood of
conflicts to close to zero.
Reported-by: Mathis Rudolf
Author: Bharath Rupireddy
Reviewed-by: Bernd Helmle, Thomas Munro, Michael Paquier
Discussion: https://postgr.es/m/109c267a-10d2-3c53-b60e-720fcf44d9e8@credativ.de
Backpatch-through: 9.6
Fix a few places that were using appendStringInfo() when they should
have been using appendStringInfoString(). Also fix some cases of
appendPQExpBuffer() that would have been better suited to
appendPQExpBufferChar(), and finally, some places that used
appendPQExpBuffer() when appendPQExpBufferStr() would have suited
better.
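Typical replacements look like this (sketch):

    appendStringInfo(&buf, "%s", str);      /* before: printf processing */
    appendStringInfoString(&buf, str);      /* after */

    appendPQExpBuffer(&query, "(");         /* before */
    appendPQExpBufferChar(&query, '(');     /* after */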
There are no bugs being fixed here. The aim is just to make the code
use the most optimal function for the job.
All the code being changed here is new to PG14. It makes sense to fix
these before we branch for PG15. There are a few other places that we
could fix, but those cases are older code so fixing those seems less
worthwhile as it may cause unnecessary back-patching pain in the future.
Author: Hou Zhijie
Discussion: https://postgr.es/m/OS0PR01MB5716732158B1C4142C6FE375943D9@OS0PR01MB5716.jpnprd01.prod.outlook.com
Commit 428b260f8 broke planning of cases where row marks are needed
(SELECT FOR UPDATE, etc) and one of the query's tables is a foreign
table that has regular table(s) as inheritance children. We got the
reverse case right, but apparently were thinking that foreign tables
couldn't be inheritance parents. Not so; so we need to be able to
add a CTID junk column while adding a new child, not only a wholerow
junk column.
Back-patch to v12 where the faulty code came in.
Amit Langote
Discussion: https://postgr.es/m/CA+HiwqEmo3FV1LAQ4TVyS2h1WM=kMkZUmbNuZSCnfHvMcUcPeA@mail.gmail.com
This case should be disallowed, just as FOR UPDATE with a plain
GROUP BY is disallowed; FOR UPDATE only makes sense when each row
of the query result can be identified with a single table row.
However, we missed teaching CheckSelectLocking() to check
groupingSets as well as groupClause, so that it would allow
degenerate grouping sets. That resulted in a bad plan and
a null-pointer dereference in the executor.
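A hedged sketch of the tightened check in CheckSelectLocking():

    /* Grouping (even a degenerate grouping set) means a result row can
     * summarize multiple table rows, so row locking makes no sense. */
    if (qry->groupClause != NIL || qry->groupingSets != NIL)
        ereport(ERROR,
                (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
                 errmsg("%s is not allowed with GROUP BY clause",
                        LCS_asString(strength))));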
Looking around for other instances of the same bug, the only one
I found was in examine_simple_variable(). That'd just lead to
silly estimates, but it should be fixed too.
Per private report from Yaoguang Chen.
Back-patch to all supported branches.
Release the memory allocated when creating the tuple-conversion map and
its component TupleDescs when the owning sync entry is invalidated.
TupleDescs must also be freed when no map is deemed necessary to begin
with.
Reported-by: Andres Freund
Author: Amit Langote
Reviewed-by: Takamichi Osumi, Amit Kapila
Backpatch-through: 13, where it was introduced
Discussion: https://postgr.es/m/MEYP282MB166933B1AB02B4FE56E82453B64D9@MEYP282MB1669.AUSP282.PROD.OUTLOOK.COM
The RADIUS-related checks in parse_hba_line() did not respect elevel
and did not fill in *err_msg. Also, verify_option_list_length()
pasted together error messages in an untranslatable way. To fix the
latter, remove the function and do the error checking inline. It's a
bit more verbose but only minimally longer, and it makes fixing the
first two issues straightforward.
Reviewed-by: Magnus Hagander <magnus@hagander.net>
Discussion: https://www.postgresql.org/message-id/flat/8381e425-8c23-99b3-15ec-3115001db1b2%40enterprisedb.com