Commit aa0105141 repeated one of the oldest mistakes in our book:
thinking that OID is the same as int32. It isn't, of course, and
unsurprisingly the first person who came along with a database
OID above 2 billion broke it. Repair.
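In sketch form, the safe coding pattern (atooid() is the stock macro
for this; the surrounding names are made up):

    /* OID is unsigned 32-bit; atoi() misbehaves above 2^31 - 1 */
    Oid         dbOid;

    dbOid = atooid(optarg);     /* (Oid) strtoul(optarg, NULL, 10) */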
Per bug #17677 from Sergey Pankov. Back-patch to v15.
Discussion: https://postgr.es/m/17677-a99fa067d7ed71c9@postgresql.org
"Triggers on partitioned tables cannot have transition tables." is
incorrect as we allow statement-level triggers on partitioned tables to
have transition tables.
This has been wrong since commit 86f575948; back-patch to v11 where that
commit came in.
Reviewed by Tom Lane.
Discussion: https://postgr.es/m/CAPmGK17gk4vXLzz2iG%2BG4LWRWCoVyam70nZ3OuGm1hMJwDrhcg%40mail.gmail.com
Commit f56f8f8da6 added some code in CloneFkReferencing that's way too
lax about a Constraint node it manufactures, not initializing enough
struct members -- initially_valid in particular was forgotten. This
causes some FKs in partitions added by ALTER TABLE ATTACH PARTITION to
be marked as not validated. Set initially_valid true, which fixes the
bug.
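The gist of the fix, in sketch form (assuming the usual makeNode()
conventions; the member list in the committed code is longer):

    /*
     * makeNode() zeroes the node, so any member not set here silently
     * becomes false/NULL -- which is how initially_valid got missed.
     */
    fkconstraint = makeNode(Constraint);
    fkconstraint->contype = CONSTR_FOREIGN;
    fkconstraint->conname = pstrdup(NameStr(constrForm->conname));
    fkconstraint->deferrable = constrForm->condeferrable;
    fkconstraint->initdeferred = constrForm->condeferred;
    fkconstraint->skip_validation = false;
    fkconstraint->initially_valid = true;   /* the actual fix */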
While at it, make the struct initialization more complete. Very similar
code was added in two other places by the same commit; make them all
follow the same pattern for consistency, though no bugs are apparent
there.
This bug has never been reported: I only happened to notice while
working on commit 614a406b4f. The test case added there, which had
captured the improper result, is repaired.
Backpatch to 12.
Discussion: https://postgr.es/m/20221005105523.bhuhkdx4olajboof@alvherre.pgsql
If a syntax error occurred in a SQL-language or PL/pgSQL-language
CREATE FUNCTION or DO command executed in a logical replication worker,
we'd suffer a null pointer dereference or assertion failure. That
seems like a rather contrived case, but nonetheless worth fixing.
The cause is that function_parse_error_transpose assumes it must be
executing within the context of a Portal, but logical/worker.c
doesn't create a Portal since it's not running the standard executor.
We can just back off the hard Assert check and make it fail gracefully
if there's not an ActivePortal. (I have a feeling that the aggressive
check here was my fault originally, probably because I wasn't sure if
the case would always hold and wanted to find out. Well, now we know.)
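A hedged sketch of the new behavior in function_parse_error_transpose()
(the committed conditions may differ in detail):

    /*
     * Without an active Portal (e.g. in a logical replication worker)
     * we cannot fetch the original query text, so tell the caller to
     * report the syntax error as-is instead of asserting.
     */
    if (ActivePortal == NULL)
        return false;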
The hazard seems to exist in all branches that have logical replication,
so back-patch to v10.
Maxim Orlov, Anton Melnikov, Masahiko Sawada, Tom Lane
Discussion: https://postgr.es/m/b570c367-ba38-95f3-f62d-5f59b9808226@inbox.ru
Discussion: https://postgr.es/m/adf0452f-8c6b-7def-d35e-ab516c80088e@inbox.ru
Since partitions can be foreign tables, not only plain tables, while
logical replication only supports plain tables, we'd better check the
relkind of a partition after we find it. (There was some discussion
of checking this when adding a partitioned table to a subscription;
but that would be inadequate since the troublesome partition could be
added later.) Without this, the situation leads to a segfault or
assertion failure.
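The shape of the added check, as a hedged sketch (the committed code
reuses the existing relkind-checking routine and message wording):

    /*
     * After routing a replicated change to a partition, make sure the
     * partition is a plain table and not, say, a foreign table.
     */
    if (partrel->rd_rel->relkind != RELKIND_RELATION)
        ereport(ERROR,
                (errcode(ERRCODE_WRONG_OBJECT_TYPE),
                 errmsg("cannot use relation \"%s\" as logical replication target",
                        RelationGetRelationName(partrel))));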
In passing, add a separate variable for the target Relation of
a cross-partition UPDATE; reusing partrel seemed mighty confusing
and error-prone.
Shi Yu and Tom Lane, per report from Ilya Gladyshev. Back-patch
to v13 where logical replication into partitioned tables became
a thing.
Discussion: https://postgr.es/m/6b93e3748ba43298694f376ca8797279d7945e29.camel@gmail.com
Previously, the description of XLOG_RUNNING_XACTS showed only
top-transaction XIDs and whether subtransactions overflowed. This commit
improves it to show individual subtransaction XIDs. This also improves the
description of overflowed subtransactions.
This additional information can be helpful for testing and debugging
purposes.
Author: Masahiko Sawada
Reviewed-by: Fujii Masao, Kyotaro Horiguchi, Ashutosh Bapat, Bharath Rupireddy
Discussion: https://postgr.es/m/CAD21AoAqvaE+XEeXHHPdAGQPcCoGXxuoeutq_nWhUSQvTt5+tA@mail.gmail.com
These two options are only available with COPY FROM, so the extra logic
in charge of checking the validity of the attributes given has no
purpose.
Author: Zhang Mingli
Reviewed-by: Richard Guo, Kyotaro Horiguchi
Discussion: https://postgr.es/m/F28F0B5A-766F-4D33-BF44-43B3A052D833@gmail.com
We have various requirements when using a dlist_head to keep track of the
number of items in the list. This, traditionally, has been done by
maintaining a counter variable in the calling code. Here we tidy this up
by adding "dclist", which is very similar to dlist but also keeps track of
the number of items stored in the list.
Callers may use the new dclist_count() function when they need to know how
many items are stored. Obtaining the count is an O(1) operation.
For simplicity, dclist and dlist both use dlist_node as their node
type and dlist_iter/dlist_mutable_iter as their iterator type. dclists
have all of the same functionality as dlists except there is no function
named dclist_delete(). To remove an item from a list dclist_delete_from()
must be used. This requires knowing which dclist the given item is stored
in.
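For example, a minimal sketch (MyItem and the surrounding code are made
up; the dclist calls are the new API):

    #include "lib/ilist.h"

    typedef struct MyItem
    {
        dlist_node  node;       /* embedded link, same node type as dlist */
        int         value;
    } MyItem;

    static dclist_head items;

    MyItem     *item = palloc0(sizeof(MyItem));

    dclist_init(&items);
    dclist_push_tail(&items, &item->node);      /* append */
    Assert(dclist_count(&items) == 1);          /* O(1) count */
    dclist_delete_from(&items, &item->node);    /* must name the list */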
Additionally, here we also convert some dlists where additional code
exists to keep track of the number of items stored, making these use
dclists instead.
Author: David Rowley
Reviewed-by: Bharath Rupireddy, Aleksander Alekseev
Discussion: https://postgr.es/m/CAApHDvrtVxr+FXEX0VbViCFKDGxA3tWDgw9oFewNXCJMmwLjLg@mail.gmail.com
Some cases would result in "cache lookup failed for statistics object",
due to trying to fetch inherited statistics when only non-inherited
ones are available or vice versa.
Richard Guo and Justin Pryzby
Discussion: https://postgr.es/m/20221030170520.GM16921@telsasoft.com
This commit adds a function to perform a cross-check between the initial
value of the C declaration associated with a GUC and its actual boot
value in assert-enabled builds. The purpose of this is to prevent
anybody reading these C declarations from being fooled by mismatched
values before they are loaded at program startup.
The following rules apply depending on the GUC type:
* bool - can be false, or same as the boot_val.
* int - can be 0, or same as the boot_val.
* real - can be 0.0, or same as the boot_val.
* string - can be NULL, or strcmp'd equal to the boot_val.
* enum - equal to the boot_val.
This is done for system GUCs as well as custom GUCs loaded by external
modules, which may require extension developers to adapt the C
declaration of the variables used by these GUCs (testing this change
with some of my own modules has allowed me to catch some stupid typos,
FWIW). This may turn out to be a bad experiment depending on the
feedback received, but let's see how it goes.
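For extension authors, the consequence is that a custom GUC's C variable
must be initialized consistently with its boot value. A hedged sketch
("my_module.enabled" is a made-up GUC):

    /*
     * Per the rules above, the initializer must be false or equal to
     * the boot value passed below, or the assert-enabled cross-check
     * complains at startup.
     */
    static bool my_module_enabled = true;

    DefineCustomBoolVariable("my_module.enabled",
                             "Enables my_module processing.",
                             NULL,
                             &my_module_enabled,
                             true,      /* boot value */
                             PGC_SUSET,
                             0,
                             NULL, NULL, NULL);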
Author: Peter Smith
Reviewed-by: Nathan Bossart, Tom Lane, Michael Paquier, Justin Pryzby
Discussion: https://postgr.es/m/CAHut+PtHE0XSfjjRQ6D4v7+dqzCw=d+1a64ujra4EX8aoc_Z+w@mail.gmail.com
This is similar to 7d25958, and this commit takes care of all the
remaining inconsistencies between the initial value used in the C
variable associated with a GUC and its default value stored in the GUC
tables (as of pg_settings.boot_val).
Some of the initial values of the GUCs updated rely on a compile-time
default. These are refactored so that the GUC table and its C
declaration use the same values. This makes everything consistent with
other places; backend_flush_after, bgwriter_flush_after, port and
checkpoint_flush_after already do so, for example.
Extracted from a larger patch by Peter Smith. The spots updated in the
modules are from me.
Author: Peter Smith, Michael Paquier
Reviewed-by: Nathan Bossart, Tom Lane, Justin Pryzby
Discussion: https://postgr.es/m/CAHut+PtHE0XSfjjRQ6D4v7+dqzCw=d+1a64ujra4EX8aoc_Z+w@mail.gmail.com
When all of the query's DISTINCT pathkeys have been marked as redundant
due to EquivalenceClasses existing which contain constants, we can just
implement the DISTINCT operation on a query by limiting the number of
returned rows to 1 instead of performing a Unique on all of the matching
(duplicate) rows.
This applies in cases such as:
SELECT DISTINCT col,col2 FROM tab WHERE col = 1 AND col2 = 10;
If there are any matching rows, then they must all be {1,10}. There's no
point in fetching all of those and running a Unique operator on them to
leave only a single row. Here we effectively just find the first row and
then stop. We are obviously unable to apply this optimization if either
col = 1 or col2 = 10 were missing from the WHERE clause, or if there
were any additional columns in the SELECT clause.
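With the optimization, the plan for the query above becomes, roughly
(a sketch, not verbatim EXPLAIN output):

    Limit
      ->  Seq Scan on tab
            Filter: ((col = 1) AND (col2 = 10))

rather than a Unique (or Sort + Unique) sitting above the scan.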
Such queries are probably not all that common, but detecting when we can
apply this optimization amounts to checking if the distinct_pathkeys are
NULL, which is very cheap indeed.
Nothing is done here to check if the query already has a LIMIT clause. If
it does then the plan may end up with 2 Limit nodes. There's no harm in
that and it's probably not worth the complexity to unify them into a
single Limit node.
Author: David Rowley
Reviewed-by: Richard Guo
Discussion: https://postgr.es/m/CAApHDvqS0j8RUWRUSgCAXxOqnYjHUXmKwspRj4GzVfOO25ByHA@mail.gmail.com
Discussion: https://postgr.es/m/MEYPR01MB7101CD5DA0A07C9DE2B74850A4239@MEYPR01MB7101.ausprd01.prod.outlook.com
Here we add a new 'copy' parameter to tuplesort_getdatum so that we can
instruct the function not to datumCopy() byref Datums before returning.
Similar to 91e9e89dc, this can provide significant performance
improvements in nodeSort when sorting by a single byref column and the
sort's targetlist contains only that column.
This allows us to re-enable Datum sorts for byref types, which were disabled
in 3a5817695 due to a reported memory leak.
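In sketch form, assuming the new parameter order (hedged; tuplesort.h
has the authoritative signature):

    Datum       value;
    bool        isnull;
    Datum       abbrev;

    /*
     * copy=false: byref Datums point into tuplesort-owned memory, valid
     * only until the next tuplesort_* call, but no datumCopy() happens.
     */
    while (tuplesort_getdatum(state, true /* forward */, false /* copy */,
                              &value, &isnull, &abbrev))
    {
        /* consume value here, before any other tuplesort call */
    }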
Additionally, here we slightly optimize DISTINCT aggregates so that we no
longer perform any datumCopy() when we find the current value not to be
distinct from the previous value. Previously the code would always take a
copy of the most recent Datum and pfree the previous value, even when the
values were the same. Testing shows a small but noticeable performance
increase when aggregate transitions are skipped due to the current
transition value being the same as the prior one.
Author: David Rowley
Discussion: https://postgr.es/m/CAApHDvqS6wC5U==k9Hd26E4EQXH3QR67-T4=Q1rQ36NGvjfVSg@mail.gmail.com
Discussion: https://postgr.es/m/CAApHDvqHonfe9G1cVaKeHbDx70R_zCrM3qP2AGXpGrieSKGnhA@mail.gmail.com
When we decide we need to make a derived clause equating a.x and
b.y, we already will re-use a previously-made clause "a.x = b.y".
But we might instead have "b.y = a.x", which is perfectly usable
because equivclass.c has never promised anything about the
operand order in clauses it builds. Saving construction of a
new RestrictInfo doesn't matter all that much in itself --- but
because we cache selectivity estimates and so on per-RestrictInfo,
there's a possibility of saving a fair amount of duplicative
effort downstream.
Hence, check for commutative matches as well as direct ones when
seeing if we have a pre-existing clause. This changes the visible
clause order in several regression test cases, but they're all
clearly-insignificant changes.
Checking for the reverse operand order is simple enough, but
if we wanted to check for operator OID match we'd need to call
get_commutator here, which is not so cheap. I concluded that
we don't really need the operator check anyway, so I just
removed it. It's unlikely that an opfamily contains more than
one applicable operator for a given pair of operand datatypes;
and if it does they had better give the same answers, so there
seems little need to insist that we use exactly the one
select_equality_operator chose.
Using the current core regression suite as a test case, I see
this change reducing the number of new join clauses built by
create_join_clause from 9673 to 5142 (out of 26652 calls).
So not quite 50% savings, but pretty close to it.
Discussion: https://postgr.es/m/78062.1666735746@sss.pgh.pa.us
This commit moves pg_pwritev_with_retry(), a convenience wrapper around
pg_writev() able to handle partial writes, to common/file_utils.c so
that the frontend code is able to use it. A first use case targeted
for this routine is pg_basebackup and pg_receivewal, for the
zero-padding of a newly-initialized WAL segment. This is currently used
in the backend when the GUC wal_init_zero is enabled (default).
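Usage is along these lines, as a hedged sketch of zero-padding from
frontend code (fd setup and full error handling elided):

    #include "common/file_utils.h"
    #include "port/pg_iovec.h"

    static PGAlignedXLogBlock zbuffer;  /* static, hence all zeros */

    struct iovec iov[PG_IOV_MAX];

    /* point every iovec at the same block of zeros */
    for (int i = 0; i < PG_IOV_MAX; i++)
    {
        iov[i].iov_base = zbuffer.data;
        iov[i].iov_len = XLOG_BLCKSZ;
    }

    /* writes the whole amount, retrying after any partial write */
    if (pg_pwritev_with_retry(fd, iov, PG_IOV_MAX, 0) < 0)
        pg_fatal("could not write file: %m");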
Author: Bharath Rupireddy
Reviewed-by: Nathan Bossart, Thomas Munro
Discussion: https://postgr.es/m/CALj2ACUq7nAb7=bJNbK3yYmp-SZhJcXFR_pLk8un6XgDzDF3OA@mail.gmail.com
These numbers are strictly monotone identifiers assigned to each rule
of pg_hba_file_rules and each map of pg_ident_file_mappings when loading
the HBA and ident configuration files, indicating the order in which
they are checked at authentication time, until a match is found.
With only one file currently loaded, this is equivalent to the line
numbers assigned to the loaded entries if one wants to know their order,
but these numbers become necessary once the inclusion of external files
is added to the HBA and ident files, to be able to know in which order
the rules and/or maps are applied at authentication. Note that, unlike
the line number, NULL is used when an HBA or ident entry cannot be
parsed or validated, i.e. when an error exists.
Bump catalog version.
Author: Julien Rouhaud
Discussion: https://postgr.es/m/20220223045959.35ipdsvbxcstrhya@jrouhaud
The intention behind 1b73d0b was to limit the use of TokenizedAuthLine,
but I have fat-fingered one location in parse_hba_line() when creating
the HbaLine, where this should use the local variable and not the value
coming from TokenizedAuthLine. The logic is exactly the same, but
let's be clean about all that on consistency grounds.
Reported-by: Julien Rouhaud
Discussion: https://postgr.es/m/20221026032730.k3sib5krgm7l6njk@jrouhaud
This has the advantage of limiting the presence of the GUC values
hba_file and ident_file to the code paths where these files are loaded,
easing the introduction of an upcoming feature aimed at adding inclusion
logic for files and directories in HBA and ident files.
Note that this needs the addition of the source file name to HbaLine, in
addition to the line number, which is something needed by the backend in
two places of auth.c (authentication failure details and auth_id log
when log_connections is enabled).
While at it, adjust a log generated on authentication failure to report
the name of the actual HBA file on which the connection attempt matched,
where the line number and the raw line written in the HBA file were
already included. This was previously hardcoded as pg_hba.conf, which
would be incorrect when a custom value is used at postmaster startup for
the GUC hba_file.
Extracted from a larger patch by the same author.
Author: Julien Rouhaud
Discussion: https://postgr.es/m/20220223045959.35ipdsvbxcstrhya@jrouhaud
The IsCTIDVar() tests in nodeTidscan.c and nodeTidrangescan.c
look buggy at first sight: they aren't checking that the varno
matches the table to be scanned. Actually they're safe because
any Var in a scan-level qual must be for the correct table ...
but if we're depending on that, it's pretty pointless to verify
varlevelsup. (Besides which, varlevelsup is *always* zero at
execution, since we've flattened the rangetable long since.)
Remove the useless varlevelsup check, and instead add some
commentary explaining why we don't need to check varno.
Noted while fooling with a planner change that causes the order
of "t1.ctid = t2.ctid" to change in some tidscan.sql tests;
I was briefly fooled into thinking there was a live bug here.
This adjusts a few things for GUCs related to logical replication,
replication slots and WAL senders, fixing incorrect comments and
values inconsistent with their initial default values.
Author: Peter Smith
Reviewed-by: Nathan Bossart, Tom Lane, Justin Pryzby
Discussion: https://postgr.es/m/CAHut+PtHE0XSfjjRQ6D4v7+dqzCw=d+1a64ujra4EX8aoc_Z+w@mail.gmail.com
As of this commit, any database or user entry beginning with a slash (/)
is treated as a regular expression. This is particularly useful for
users, as previously there was no clean way to match a pattern across
multiple HBA lines. For example, a user name mapping with a regular
expression first needs to match an HBA line, and we would then skip the
follow-up HBA entries if the ident regexp does *not* match what has
matched in the HBA line.
pg_hba.conf is able to handle multiple databases and roles with a
comma-separated list of these, hence individual regular expressions that
include commas need to be double-quoted.
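For instance (a hypothetical pg_hba.conf excerpt; the first regexp must
be double-quoted only because its {2,4} quantifier contains a comma):

    # TYPE  DATABASE          USER              METHOD
    local   "/^db\d{2,4}$"    all               trust
    local   all               /^app_\d+$        peer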
At authentication time, user and database names are now checked in the
following order:
- Arbitrary keywords (like "all", or the ones beginning with '+' for
membership checks), which we know will never be a regexp. A special case
is physical WAL senders, where we *have* to match only "replication" for
the database.
- Regular expression matching.
- Exact match.
The previous logic did the same, but without the regexp step.
We have also discussed the possibility of supporting regexp pattern
matching for host names, but that happens to lead to tricky issues from
what I understand, particularly with host entries that have CIDRs.
This commit relies heavily on the refactoring done in a903971 and
fc579e1, so that the amount of code required to compile and execute
regular expressions is now minimal. When parsing pg_hba.conf, all the
compiled regexps need to be explicitly free()'d, as already done for
pg_ident.conf.
Documentation and TAP tests are added to cover this feature, including
cases where the regexps use commas (for clarity in the docs, coverage
for the parsing logic in the tests).
Note that this introduces a breakage with older versions, where a
database or user name beginning with a slash is treated as something to
check for an exact match. Per discussion, we have dismissed this as not
being much of an issue in practice, as it would require a cluster to
have database and/or role names that begin with a slash, as well as HBA
entries using these. Hence, the consistency gained with regexps in
pg_ident.conf is more appealing in the long term.
**This compatibility change should be mentioned in the release notes.**
Author: Bertrand Drouvot
Reviewed-by: Jacob Champion, Tom Lane, Michael Paquier
Discussion: https://postgr.es/m/fff0d7c1-8ad4-76a1-9db3-0ab6ec338bf7@amazon.com
Since pg_backup_start() and pg_backup_stop() exist, the tablespace map
data and the backup state data (backup_label string until 7d70809) have
been allocated in the TopMemoryContext. This approach would cause
memory leaks in the session calling these functions if failures happen
before pg_backup_stop() ends, leaking more memory on repeated failures.
Both things need little memory, so this would not really be noticeable
for most users, except perhaps for connection poolers with long-lived
connections able to trigger backup failures with these functions.
This commit improves the logic in this area by no longer allocating the
backup-related data that needs to travel across the SQL-callable backup
functions in TopMemoryContext, using instead a dedicated memory context
that is a child of TopMemoryContext. The memory context is created in
pg_backup_start() and deleted when pg_backup_stop() finishes. In the
event of an in-flight failure, this memory context gets reset in the
follow-up pg_backup_start() call, so we are sure that at most one run's
worth of data is leaked at any time. Some cleanup was already done for
the backup data on a follow-up call of pg_backup_start(), but using a
memory context makes the whole thing simpler.
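The pattern, in sketch form (the context name and variable are
illustrative):

    /* In pg_backup_start(): create, or reset after an aborted run */
    if (backupcontext == NULL)
        backupcontext = AllocSetContextCreate(TopMemoryContext,
                                              "on-line backup",
                                              ALLOCSET_START_SMALL_SIZES);
    else
        MemoryContextReset(backupcontext);  /* bounds any prior leak */

    /* In pg_backup_stop(): release everything in one shot */
    MemoryContextDelete(backupcontext);
    backupcontext = NULL;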
BASE_BACKUP commands are executed in isolation, relying on the memory
context created for replication commands, hence these do not need such
extra logic.
Author: Bharath Rupireddy
Reviewed-by: Robert Haas, Alvaro Herrera, Cary Huang, Michael Paquier
Discussion: https://postgr.es/m/CALj2ACXqvfKF2B0beQ=aJMdWnpNohmBPsRg=EDQj_6y1t2O8mQ@mail.gmail.com
It happens that the parts of hba.conf that are planned to be extended
to support regular expressions would end up using the same error
message as the one used currently for pg_ident.conf when a regular
expression cannot be compiled, as long as the routine centralizing the
logic, regcomp_auth_token(), knows which file the regexp comes from
and its line location in said file.
This change makes the follow-up patches slightly simpler, and the logic
remains the same. I suspect that this makes the proposal to add support
for file inclusions in pg_ident.conf and pg_hba.conf slightly simpler,
as well.
Extracted from a larger patch by the same author. This is similar to
the refactoring done in fc579e1.
Author: Bertrand Drouvot
Discussion: https://postgr.es/m/fff0d7c1-8ad4-76a1-9db3-0ab6ec338bf7@amazon.com
This makes the choice of result scale of numeric power() for integer
exponents consistent with the choice for non-integer exponents, and
with the result scale of other numeric functions. Specifically, the
result scale will be at least as large as the scale of either input,
and sufficient to ensure that the result has at least 16 significant
digits.
Formerly, the result scale was based only on the scale of the first
input, without taking into account the weight of the result. For
results with negative weight, that could lead to results with very few
or even no non-zero significant digits (e.g., 10.0 ^ (-18) produced
0.0000000000000000).
Fix this by moving responsibility for the choice of result scale into
power_var_int(), which already has code to estimate the result weight.
Per report by Adrian Klaver and suggested fix by Tom Lane.
No back-patch -- arguably this is a bug fix, but one which is easy to
work around, so it doesn't seem worth the risk of changing query
results in stable branches.
Discussion: https://postgr.es/m/12a40226-70ac-3a3b-3d3a-fdaf9e32d312%40aklaver.com
When logical decoding restarts from NEW_CID, since there is no
association between the top transaction and its subtransaction, both are
created as top transactions and have the same LSN. This caused an
assertion failure in AssertTXNLsnOrder().
This patch skips the assertion check until we reach the LSN at which we
start decoding the contents of the transaction, specifically
start_decoding_at LSN in SnapBuild. This is okay because we don't
guarantee to make the association between top transaction and
subtransaction until we try to decode the actual contents of the transaction.
The ordering of the records prior to the start_decoding_at LSN should have
been checked before the restart.
The other assertion failure occurs because we forgot to track that we
have already considered the top-level transaction ID in the list of
catalog-changing transactions that were committed when one of its
subtransactions is marked as containing a catalog change.
Reported-by: Tomas Vondra, Osumi Takamichi
Author: Masahiko Sawada, Kuroda Hayato
Reviewed-by: Amit Kapila, Dilip Kumar, Kuroda Hayato, Kyotaro Horiguchi, Masahiko Sawada
Backpatch-through: 10
Discussion: https://postgr.es/m/a89b46b6-0239-2fd5-71a9-b19b1f7a7145%40enterprisedb.com
Discussion: https://postgr.es/m/TYCPR01MB83733C6CEAE47D0280814D5AED7A9%40TYCPR01MB8373.jpnprd01.prod.outlook.com
We had two copies of almost identical logic to revert shared memory
state when a running backup aborts; we can remove
pg_backup_start_callback if we adapt do_pg_abort_backup so that it can
be used for this purpose too.
However, in order for this to work, we have to repurpose the flag passed
to do_pg_abort_backup. It used to indicate whether to throw a warning
(and the only caller always passed true). It now indicates whether the
callback is being called at start time (in which case the session backup
state is known not to have been set to RUNNING yet, so action is always
taken) or shmem time (in which case action is only taken if the session
backup state is RUNNING). Thus the meaning of the flag is no longer
superfluous, but it's actually quite critical to get right. I (Álvaro)
chose to change the polarity and the code flow re. the flag from what
Bharath submitted, for coding clarity.
Co-authored-by: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>
Discussion: https://www.postgresql.org/message-id/20221013111330.564fk5tkwe3ha77l%40alvherre.pgsql
As currently designed, with a callback registered in an ERROR_CLEANUP
block, the shutdown callback would get called twice when updating
archive_library on SIGHUP, which is something that we want to avoid to
ease the life of extension writers.
Anyway, an ERROR in the archiver process is treated as a FATAL, stopping
it immediately, hence there is no need for an ERROR_CLEANUP block.
Instead of that, the shutdown callback is now called via
before_shmem_exit(), giving the modules the opportunity to do any
cleanup actions before the server shuts down its subsystems.
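From a module's point of view, nothing changes in how the callback is
registered; a hedged sketch against the v15-era API (the my_* functions
are made up):

    #include "postgres.h"
    #include "postmaster/pgarch.h"

    static bool
    my_check_configured(void)
    {
        return true;
    }

    static bool
    my_archive_file(const char *file, const char *path)
    {
        return true;            /* pretend the file was archived */
    }

    static void
    my_shutdown(void)
    {
        /* cleanup; now reached via before_shmem_exit() at shutdown */
    }

    void
    _PG_archive_module_init(ArchiveModuleCallbacks *cb)
    {
        cb->check_configured_cb = my_check_configured;
        cb->archive_file_cb = my_archive_file;
        cb->shutdown_cb = my_shutdown;
    }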
While at it, this commit adds some testing coverage for the shutdown
callback. Neither shell_archive nor basic_archive has been using it,
so one is added to shell_archive, whose trigger is checked in a TAP
test through a shutdown sequence.
Author: Nathan Bossart, Bharath Rupireddy
Reviewed-by: Kyotaro Horiguchi, Michael Paquier
Discussion: https://postgr.es/m/20221015221328.GB1821022@nathanxps13
Backpatch-through: 15
AuthToken gains a regular expression, and IdentLine is changed so that
it uses an AuthToken rather than separately tracking the ident user string
used for the regex compilation and its generated regex_t. In the case
of pg_ident.conf, a set of AuthTokens is built in the pre-parsing phase
of the file, and an extra regular expression is compiled when building
the list of IdentLines, after checking the sanity of the fields in a
pre-parsed entry.
The logic in charge of computing and executing regular expressions is
now done in a new set of routines called respectively
regcomp_auth_token() and regexec_auth_token() that are wrappers around
pg_regcomp() and pg_regexec(), working on AuthTokens. While at it, this
patch adds a routine to free an AuthToken, free_auth_token(), to
simplify a bit the logic around the requirement of using a specific free
routine for compiled regular expressions. Note that there are no
functional or behavior changes introduced by this commit.
The goal of this patch is to ease the use of regular expressions with
more items of pg_hba.conf (user list, database list, potentially
hostnames) where AuthTokens are used extensively. This will be tackled
later in a separate patch.
Author: Bertrand Drouvot, Michael Paquier
Discussion: https://postgr.es/m/fff0d7c1-8ad4-76a1-9db3-0ab6ec338bf7@amazon.com
Preprocessing of the HAVING clause will reduce havingQual to NIL
if the clause is constant-TRUE. This is one case where that
convention is rather unfortunate, because "HAVING TRUE" is not at all
the same as not having any HAVING clause at all. (Per the SQL spec,
it still forces the query to be grouped.) The planner deals with this
by having a boolean hasHavingQual that records whether havingQual was
originally nonempty; places that just want to check whether HAVING
was specified are supposed to consult that.
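In sketch form, the trap and the correct test (illustrative only):

    /*
     * Wrong: a constant-TRUE HAVING clause has been folded to NIL by
     * preprocessing, so this believes there is no HAVING at all.
     */
    if (root->parse->havingQual != NULL)
        treat_as_grouped = true;

    /* Right: consult the flag recorded before preprocessing. */
    if (root->hasHavingQual)
        treat_as_grouped = true;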
I found three places that got that wrong. Fortunately, these could
only affect cost estimates, not correctness. It'd be hard even
to demonstrate the errors; for example, the one in allpaths.c would
only matter in a query that has HAVING TRUE but no GROUP BY and no
aggregates, which would require a completely variable-free SELECT
list, making the case probably of only academic interest. Hence,
while these are worth fixing before someone copies the incorrect
coding somewhere more critical, they don't seem worth back-patching.
I didn't bother trying to devise regression tests, either.
Discussion: https://postgr.es/m/2503888.1666042643@sss.pgh.pa.us
These routines have been renamed in a19e5ce. There is no need to keep
the compatibility declarations on HEAD, as once an extension moves to
the new routine name when compiling with v16~ the code would work the
same way when recompiled on v15. No backpatch to v15 for this one,
because ABI compatibility has to be maintained there.
Discussion: https://postgr.es/m/20221013194820.ciktb2sbbpw7cljm@awork3.anarazel.de
Per discussion, the existing name of the routine used to initialize a
SRF function with materialize mode is unpopular, so rename it. Equally,
the flags of this function are renamed, as follows:
- SRF_SINGLE_USE_EXPECTED -> MAT_SRF_USE_EXPECTED_DESC
- SRF_SINGLE_BLESS -> MAT_SRF_BLESS
The previous function and flags introduced in 9e98583 are kept around
for compatibility purposes, so that any extension code already compiled
with v15 continues to work as-is. The declarations introduced here for
compatibility will be removed from HEAD in a follow-up commit.
The new names have been suggested by Andres Freund and Melanie
Plageman.
Discussion: https://postgr.es/m/20221013194820.ciktb2sbbpw7cljm@awork3.anarazel.de
Backpatch-through: 15
When creating a cast that uses a conversion function, we've
historically allowed the input and result types to be
binary-compatible with the function's input and result types,
rather than necessarily being identical. This means that the new
cast is logically dependent on the binary-compatible cast or casts
that it references: if those are defined by pg_cast entries, and you
try to restore the new cast without having defined them, it'll fail.
Hence, we should make pg_depend entries to record these dependencies
so that pg_dump knows that there is an ordering requirement.
This is not the only place where we allow such shortcuts; aggregate
functions for example are similarly lax, and in principle should gain
similar dependencies. However, for now it seems sufficient to fix
the cast-versus-cast case, as pg_dump's other ordering heuristics
should keep it out of trouble for other object types.
Per report from David Turoň; thanks also to Robert Haas for
preliminary investigation. I considered back-patching, but
seeing that this issue has existed for many years without
previous reports, it's not clear it's worth the trouble.
Moreover, back-patching wouldn't be enough to ensure that the
new pg_depend entries exist in existing databases anyway.
Discussion: https://postgr.es/m/OF0A160F3E.578B15D1-ONC12588DA.003E4857-C12588DA.0045A428@notes.linuxbox.cz