If recordDependencyOnCurrentExtension is invoked on a pre-existing,
free-standing object during an extension update script, that object
will become owned by the extension. In our current code this is
possible in three cases:
* Replacing a "shell" type or operator.
* CREATE OR REPLACE overwriting an existing object.
* ALTER TYPE SET, ALTER DOMAIN SET, and ALTER OPERATOR SET.
The first of these cases is intentional behavior, as noted by the
existing comments for GenerateTypeDependencies. It seems like
appropriate behavior for CREATE OR REPLACE too; at least, the obvious
alternatives are not better. However, the fact that it happens during
ALTER is an artifact of trying to share code (GenerateTypeDependencies
and makeOperatorDependencies) between the CREATE and ALTER cases.
Since an extension script would be unlikely to ALTER an object that
didn't already belong to the extension, this behavior is not very
troubling for the direct target object ... but ALTER TYPE SET will
recurse to dependent domains, and it is very uncool for those to
become owned by the extension if they were not already.
Let's fix this by redefining the ALTER cases to never change extension
membership, full stop. We could minimize the behavioral change by
only changing the behavior when ALTER TYPE SET is recursing to a
domain, but that would complicate the code and it does not seem like
a better definition.
Per bug #17144 from Alex Kozhemyakin. Back-patch to v13 where ALTER
TYPE SET was added. (The other cases are older, but since they only
affect the directly-named object, there's not enough of a problem to
justify changing the behavior further back.)
Discussion: https://postgr.es/m/17144-e67d7a8f049de9af@postgresql.org
This reverts the following commits:
1b5617eb84 Describe (auto-)analyze behavior for partitioned tables
0e69f705cc Set pg_class.reltuples for partitioned tables
41badeaba8 Document ANALYZE storage parameters for partitioned tables
0827e8af70 autovacuum: handle analyze for partitioned tables
There are efficiency issues in this code when handling databases with
large numbers of partitions, and it doesn't look like there is any
trivial way to handle those. There are some other issues as well. It's
now too late in the cycle for nontrivial fixes, so we'll have to let
Postgres 14 users continue to ANALYZE their partitioned tables manually,
and hopefully we can fix the issues for Postgres 15.
I kept [most of] be280cdad2 ("Don't reset relhasindex for partitioned
tables on ANALYZE") because while we added it due to 0827e8af70, it is
a good bugfix in its own right, since it affects manual analyze as well
as autovacuum-induced analyze, and there's no reason to revert it.
I retained the addition of relkind 'p' to tables included by
pg_stat_user_tables, because reverting that would require a catversion
bump.
Also, in pg14 only, I keep a struct member that was added to
PgStat_TabStatEntry to avoid breaking compatibility with existing stat
files.
Backpatch to 14.
Discussion: https://postgr.es/m/20210722205458.f2bug3z6qzxzpx2s@alap3.anarazel.de
During a VACUUM or CLUSTER command, the initial output emits a
fully qualified relation path with namespace. The post-action
errmsg only emitted the relation name, however, which may lead
to hard-to-parse output when using multiple jobs with vacuumdb,
as the output from different jobs may be interleaved. Include
the full path in the post-action errmsg to be consistent with
the initial errmsg.
Author: Mike Fiedler <miketheman@gmail.com>
Reviewed-by: Corey Huinker <corey.huinker@gmail.com>
Discussion: https://postgr.es/m/CAMerE0oz+8G-aORZL_BJcPxnBqewZAvND4bSUysjz+r-oT1BxQ@mail.gmail.com
ALTER TABLE .. SET {LOGGED,UNLOGGED,ACCESS METHOD} would never invoke a
table-level object access hook, which was inconsistent with SET
TABLESPACE. Note that, contrary to SET TABLESPACE, the no-op case is
left out for those commands, as handling it would require tracking
whether the commands have been invoked even when they do not execute a
physical rewrite. Another thing worth noting is that the physical file
swap at the end of a rewrite does a couple of access calls for the
internal objects created for the swap operation (internal objects are,
for example, skipped by the tests of sepgsql), but this does not trigger
the hook for the table on which the operation is done.
f41872d, which added support for SET LOGGED/UNLOGGED in ALTER TABLE,
visibly forgot to consider that.
Based on what I checked, two regression tests of sepgsql in ddl.sql are
going to log more information with this change, something that buildfarm
member rhinoceros will tell soon enough. I am not completely sure of
their format though, so these are not refreshed yet.
This is arguably a bug, but no backpatch is done as this could cause a
behavior change for anybody using object access hooks.
Reported-by: Jeff Davis
Discussion: https://postgr.es/m/YQJKV29/1a60uG68@paquier.xyz
This saves a few lines of code. Also add a comment to mention why we use
ExplainPropertyInteger instead of ExplainPropertyUInteger given that
queryid is a uint64 type.
Author: David Rowley
Reviewed-by: Julien Rouhaud
Discussion: https://postgr.es/m/CAApHDvqhSLYpSU_EqUdN39w9Uvb8ogmHV7_3YhJ0S3aScGBjsg@mail.gmail.com
Backpatch-through: 14, where this code was originally added
Rather than trying to pick table aliases that won't conflict with
any possible user-defined matview column name, adjust the queries'
syntax so that the aliases are only used in places where they can't be
mistaken for column names. Mostly this consists of writing "alias.*"
not just "alias", which adds clarity for humans as well as machines.
We do have the issue that "SELECT alias.*" acts differently from
"SELECT alias", but we can use the same hack ruleutils.c uses for
whole-row variables in SELECT lists: write "alias.*::compositetype".
We might as well revert to the original aliases after doing this;
they're a bit easier to read.
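As a hedged illustration of the aliasing idea (using a made-up table,
not the real internal diff queries; parentheses added for readability):
    CREATE TEMP TABLE mv_demo (a int, b text);
    SELECT d FROM mv_demo d;               -- one composite column named d
    SELECT d.* FROM mv_demo d;             -- expands to the columns a and b
    SELECT (d.*)::mv_demo FROM mv_demo d;  -- whole-row value again, written
                                           -- so d can't be read as a column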
Like 75d66d10e, back-patch to all supported branches.
Discussion: https://postgr.es/m/2488325.1628261320@sss.pgh.pa.us
Commit a8fd13cab0 added support for prepared transactions to built-in
logical replication via a new option "two_phase" for a subscription. The
"two_phase" option was not allowed with the existing streaming option.
This commit permits the combination of the "streaming" and "two_phase"
subscription options. It extends the pgoutput plugin and the
subscriber-side code to add the prepare API for streaming transactions,
which will apply the changes accumulated in the spool file at prepare time.
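For instance (hypothetical connection string and names; assumes the
publisher side supports two-phase commit), both options can now be
given together:
    CREATE SUBSCRIPTION mysub
        CONNECTION 'host=publisher dbname=postgres'
        PUBLICATION mypub
        WITH (streaming = on, two_phase = on);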
Author: Peter Smith and Ajin Cherian
Reviewed-by: Vignesh C, Amit Kapila, Greg Nancarrow
Tested-By: Haiying Tang
Discussion: https://postgr.es/m/02DA5F5E-CECE-4D9C-8B4B-418077E2C010@postgrespro.ru
Discussion: https://postgr.es/m/CAMGcDxeqEpWj3fTXwqhSwBdXd2RS9jzwWscO-XbeCfso6ts3+Q@mail.gmail.com
They are used in code that runs both during normal operation and during
WAL replay, and needs to behave differently during replay. Move them to
xlogutils.c, because that's where we have other helper functions used by
redo routines.
Reviewed-by: Andres Freund
Discussion: https://www.postgresql.org/message-id/b3b71061-4919-e882-4857-27e370ab134a%40iki.fi
The logic used to support a change of access method for a table is
similar to changes for tablespace or relation persistence, requiring a
table rewrite with an exclusive lock on the relation being changed.
Table rewrites done in ALTER TABLE already go through the table AM layer
when scanning tuples from the old relation and inserting them into the
new one, making this implementation straightforward.
Note that partitioned tables are not supported as these have no access
methods defined.
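A hedged example, with hypothetical names (assumes an alternative table
access method my_heap has been installed):
    -- rewrites the table into the new access method under an exclusive lock
    ALTER TABLE my_tab SET ACCESS METHOD my_heap;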
Author: Justin Pryzby, Jeff Davis
Reviewed-by: Michael Paquier, Vignesh C
Discussion: https://postgr.es/m/20210228222530.GD20769@telsasoft.com
pg_event_trigger_ddl_commands used "pg_temp" to refer to any
temp schema, not only that of the current backend. This seems
like overreach. It's somewhat unlikely that DDL commands would
refer to temp objects of other sessions to begin with, but if they
do, "pg_temp" would be a most misleading way to display the action.
While this seems like a bug, it's not quite out of the realm of
possibility that somebody out there is expecting the current
behavior. Hence, fix in HEAD, but don't back-patch.
Discussion: https://postgr.es/m/CAAJ_b97W=QaGmag9AhWNbmx3uEYsNkXWL+OVW1_E1D3BtgWvtw@mail.gmail.com
This patch causes EXPLAIN output to refer to objects that are in
the current session's temp schema with the "pg_temp" schema alias
rather than that schema's actual name. This is useful for our own
testing purposes since it will stabilize EXPLAIN VERBOSE output
for such cases, allowing us to use that in regression tests.
It should be less confusing for end users too.
Since ruleutils.c needs to change behavior for this, the change
also leaks into a few other users of ruleutils.c, for example
pg_get_viewdef(). AFAICS that won't cause any problems.
We did find that aggressively trying to change this behavior
across-the-board would cause issues, but as long as "pg_temp"
only appears within generated SQL text, I think it'll be fine.
Along the way, make get_namespace_name_or_temp conform to the
same API as get_namespace_name, ie that it returns a palloc'd
string or NULL. The current behavior hasn't caused any bugs
since no callers attempt to pfree the result, but if it gets
more widespread usage that could become a problem.
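A hedged illustration (hypothetical table name; exact plan shape may vary):
    CREATE TEMP TABLE tmp_demo (a int);
    EXPLAIN (VERBOSE, COSTS OFF) SELECT a FROM tmp_demo;
    -- the output now refers to "pg_temp.tmp_demo" instead of the session's
    -- actual temp schema name, such as "pg_temp_3.tmp_demo"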
Amul Sul, reviewed and extended by me
Discussion: https://postgr.es/m/CAAJ_b97W=QaGmag9AhWNbmx3uEYsNkXWL+OVW1_E1D3BtgWvtw@mail.gmail.com
Coverity complained that my commit 80ba4bb383 added a dubious coding
for a consistency check that there isn't more than one row for a certain
tgrelid/tgparentid combination. But we don't check for that explicitly
anywhere else, and if we were to do it, it should be a full
shouldn't-happen elog not just an assert. It doesn't seem that this is
very important anyway, so remove it.
Discussion: https://postgr.es/m/1337562.1627224583@sss.pgh.pa.us
Renaming triggers on partitioned tables had two problems: first,
it did not recurse to renaming the triggers on the partitions; and
second, it failed to prohibit renaming clone triggers. Having triggers
with different names in partitions is pointless, and furthermore pg_dump
would not preserve names for partitions anyway.
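A hedged sketch with hypothetical names:
    -- renaming on the partitioned table now recurses to the partitions'
    -- triggers; renaming a cloned trigger directly on a partition is rejected
    ALTER TRIGGER trg ON parted_tab RENAME TO trg_renamed;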
Not backpatched -- making the ALTER TRIGGER throw an error in stable
versions might cause problems for existing scripts.
Co-authored-by: Arne Roland <A.Roland@index.de>
Co-authored-by: Álvaro Herrera <alvherre@alvh.no-ip.org>
Reviewed-by: Zhihong Yu <zyu@yugabyte.com>
Discussion: https://postgr.es/m/d0fd7040c2fb4de1a111b9d9ccc456b8@index.de
We don't allow creating a replication slot with an empty name ('') via the
SQL API pg_create_logical_replication_slot(), but it was possible to set
one via the ALTER SUBSCRIPTION command. This would lead to the apply worker
repeatedly trying to stream data via the slot_name '', even though the user
is not allowed to create a slot with that name.
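A hedged example with a hypothetical subscription name; this is the kind of
command that is now rejected:
    ALTER SUBSCRIPTION mysub SET (slot_name = '');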
Author: Japin Li
Reviewed-By: Ranier Vilela, Amit Kapila
Backpatch-through: 10, where it was introduced
Discussion: https://postgr.es/m/MEYP282MB1669CBD98E721C77CA696499B61A9@MEYP282MB1669.AUSP282.PROD.OUTLOOK.COM
Check for conflicting or redundant options, as we do for most other
commands. Specifying any option more than once is at best redundant,
and quite likely indicates a bug in the user's code.
While at it, improve the error for conflicting locale options by
adding detail text (the same as for CREATE DATABASE).
Bharath Rupireddy, reviewed by Vignesh C. Some additional hacking by
me.
Discussion: https://postgr.es/m/CALj2ACWtL6fTLdyF4R_YkPtf1YEDb6FUoD5DGAki3rpD+sWqiA@mail.gmail.com
When triggers are cloned from partitioned tables to their partitions,
the 'tgenabled' flag (origin/replica/always/disable) was not propagated.
Make it so that the flag on the partition's trigger is initially set to
the same value as on the partitioned table.
Add a test case to verify the behavior.
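A hedged sketch of the behavior, with hypothetical names:
    CREATE TABLE parted (a int) PARTITION BY LIST (a);
    CREATE FUNCTION trg_fn() RETURNS trigger LANGUAGE plpgsql
        AS $$ BEGIN RETURN NEW; END $$;
    CREATE TRIGGER trg AFTER INSERT ON parted
        FOR EACH ROW EXECUTE FUNCTION trg_fn();
    ALTER TABLE parted DISABLE TRIGGER trg;
    CREATE TABLE part1 PARTITION OF parted FOR VALUES IN (1);
    -- the cloned trigger on part1 now starts out disabled ('D') as well
    SELECT tgname, tgenabled FROM pg_trigger WHERE tgrelid = 'part1'::regclass;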
Backpatch to 11, where this appeared in commit 86f575948c.
Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
Reported-by: Justin Pryzby <pryzby@telsasoft.com>
Discussion: https://postgr.es/m/20200930223450.GA14848@telsasoft.com
As of v14, pg_depend contains almost 7000 "pin" entries recording
the OIDs of built-in objects. This is a fair amount of bloat for
every database, and it adds time to pg_depend lookups as well as
initdb. We can get rid of all of those entries in favor of an OID
range check, i.e. "OIDs below FirstUnpinnedObjectId are pinned".
(template1 and the public schema are exceptions. Those exceptions
are now wired into IsPinnedObject() instead of initdb's code for
filling pg_depend, but it's the same amount of cruft either way.)
The contents of pg_shdepend are modified likewise.
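A hedged way to see the effect on pg_depend:
    -- with this change there are no longer any 'p' (pin) entries at all
    SELECT count(*) FROM pg_depend WHERE deptype = 'p';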
Discussion: https://postgr.es/m/3737988.1618451008@sss.pgh.pa.us
When reporting "conflicting or redundant options" errors, try to
ensure that errposition() is used, to help the user identify the
offending option.
Formerly, errposition() was invoked in less than 60% of cases. This
patch raises that to over 90%, but there remain a few places where the
ParseState is not readily available. Using errdetail() might improve
the error in such cases, but that is left as a task for the future.
Additionally, since this error is thrown from over 100 places in the
codebase, introduce a dedicated function to throw it, reducing code
duplication.
Extracted from a slightly larger patch by Vignesh C. Reviewed by
Bharath Rupireddy, Alvaro Herrera, Dilip Kumar, Hou Zhijie, Peter
Smith, Daniel Gustafsson, Julien Rouhaud and me.
Discussion: https://postgr.es/m/CALDaNm33FFSS5tVyvmkoK2cCMuDVxcui=gFrjti9ROfynqSAGA@mail.gmail.com
Commit 0563a3a8b changed how partition constraints were generated such
that this function no longer computes the mapping of parent attnos to
child attnos.
This is an external function that extensions could use, so this is
potentially a breaking change. No external callers are known, however,
and this will make it simpler to write such callers in the future.
Author: Hou Zhijie
Reviewed-by: David Rowley, Michael Paquier, Soumyadeep Chakraborty
Discussion: https://www.postgresql.org/message-id/flat/OS0PR01MB5716A75A45BE46101A1B489894379@OS0PR01MB5716.jpnprd01.prod.outlook.com
To add support for streaming transactions at prepare time into the
built-in logical replication, we need to do the following things:
* Modify the output plugin (pgoutput) to implement the new two-phase API
callbacks, by leveraging the extended replication protocol.
* Modify the replication apply worker, to properly handle two-phase
transactions by replaying them on prepare.
* Add a new SUBSCRIPTION option "two_phase" to allow users to enable
two-phase transactions. We enable the two_phase once the initial data sync
is over.
We however must explicitly disable replication of two-phase transactions
during replication slot creation, even if the plugin supports it. We
don't need to replicate the changes accumulated during this phase,
and moreover, we don't have a replication connection open so we don't know
where to send the data anyway.
The streaming option is not allowed with this new two_phase option. This
can be done as a separate patch.
We don't allow toggling the two_phase option of a subscription because it
can lead to an inconsistent replica. For the same reason, we don't allow
refreshing the publication once two_phase is enabled for a subscription
unless the copy_data option is false.
Author: Peter Smith, Ajin Cherian and Amit Kapila based on previous work by Nikhil Sontakke and Stas Kelvich
Reviewed-by: Amit Kapila, Sawada Masahiko, Vignesh C, Dilip Kumar, Takamichi Osumi, Greg Nancarrow
Tested-By: Haiying Tang
Discussion: https://postgr.es/m/02DA5F5E-CECE-4D9C-8B4B-418077E2C010@postgrespro.ru
Discussion: https://postgr.es/m/CAA4eK1+opiV4aFTmWWUF9h_32=HfPOW9vZASHarT0UA5oBrtGw@mail.gmail.com
"Result Cache" was never a great name for this node, but nobody managed
to come up with another name that anyone liked enough. That was until
David Johnston mentioned "Node Memoization", which Tom Lane revised to
just "Memoize". People seem to like "Memoize", so let's do the rename.
Reviewed-by: Justin Pryzby
Discussion: https://postgr.es/m/20210708165145.GG1176@momjian.us
Backpatch-through: 14, where Result Cache was introduced
There's no point in checking whether an INT8 sequence has seqmin and seqmax
values outside the range of the minimum and maximum values for the int64
type. These both use the same underlying type, so an INT8 sequence certainly
cannot have bounds outside the minimum and maximum values supported by int64.
This code is fairly harmless, and it seems likely that most compilers
would optimize it out anyway; nevertheless, let's remove it, replacing
it with a small comment to mention why the check is not needed.
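For illustration (hypothetical sequence name):
    -- an int8 sequence's default bounds are already the int64 limits,
    -- so the removed check could never fail for it
    CREATE SEQUENCE seq_demo AS int8;
    SELECT seqmin, seqmax FROM pg_sequence WHERE seqrelid = 'seq_demo'::regclass;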
Author: Greg Nancarrow, with the comment revised by David Rowley
Discussion: https://postgr.es/m/CAJcOf-c9KBUZ8ow_6e%3DWSfbbEyTKfqV%3DVwoFuODQVYMySHtusw%40mail.gmail.com
The idea behind this patch is to design out bugs like the one fixed
by commit 9d523119f. Previously, once one did RelationOpenSmgr(rel),
it was considered okay to access rel->rd_smgr directly for some
not-very-clear interval. But since that pointer will be cleared by
relcache flushes, we had bugs arising from overreliance on a previous
RelationOpenSmgr call still being effective.
Now, very little code except that in rel.h and relcache.c should ever
touch the rd_smgr field directly. The normal coding rule is to use
RelationGetSmgr(rel) and not expect the result to be valid for longer
than one smgr function call. There are a couple of places where using
the function every single time seemed like overkill, but they are now
annotated with large warning comments.
Amul Sul, after an idea of mine.
Discussion: https://postgr.es/m/CANiYTQsU7yMFpQYnv=BrcRVqK_3U3mtAzAsJCaqtzsDHfsUbdQ@mail.gmail.com
Although we were careful to lock the object being added or dropped,
we failed to get any sort of lock on the extension itself. This
allowed the ALTER to proceed in parallel with a DROP EXTENSION,
which is problematic for a couple of reasons. If both commands
succeeded we'd be left with a dangling link in pg_depend, which
would cause problems later. Also, if the ALTER failed for some
reason, it might try to print the extension's name, and that could
result in a crash or (in older branches) a silly error message
complaining about extension "(null)".
Per bug #17098 from Alexander Lakhin. Back-patch to all
supported branches.
Discussion: https://postgr.es/m/17098-b960f3616c861f83@postgresql.org
Most error messages about a relkind that was not supported or
appropriate for the command were of the pattern
"relation \"%s\" is not a table, foreign table, or materialized view"
This style can become verbose and tedious to maintain. Moreover, it's
not very helpful: If I'm trying to create a comment on a TOAST table,
which is not supported, then the information that I could have created
a comment on a materialized view is pointless.
Instead, make the primary error message shorter, saying more directly
that what was attempted is not possible. Then, in the detail message,
explain that the operation is not supported for the relkind of the
object. To simplify that, add a new function
errdetail_relkind_not_supported() that does this.
In passing, make use of RELKIND_HAS_STORAGE() where appropriate,
instead of listing out the relkinds individually.
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Alvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://www.postgresql.org/message-id/flat/dc35a398-37d0-75ce-07ea-1dd71d98f8ec@2ndquadrant.com
Instead of using multiple parameters in the parse_subscription_options
function signature, use the struct SubOpts, which encapsulates all the
subscription options and their values. This will be useful for future work
where we need to add other subscription options. Also, use bitmaps to pass
the supported options and to retrieve the specified options, much like the
way it is done in commit a3dc926009.
Author: Bharath Rupireddy
Reviewed-By: Peter Smith, Amit Kapila, Alvaro Herrera
Discussion: https://postgr.es/m/CALj2ACXtoQczfNsDQWobypVvHbX2DtgEHn8DawS0eGFwuo72kw@mail.gmail.com
copy_data is not a supported option with this sub-command of ALTER
SUBSCRIPTION, so the variable related to it was left uninitialized after
parsing the option set in DefElems, and a refresh could then refer to it.
Author: Ranier Vilela
Reviewed-by: Peter Smith
Discussion: https://postgr.es/m/CAEudQAp5P8nr=ze2Gv=BMj=DJFZnrvendZCZcC-fos3QiDe2sg@mail.gmail.com
It's not really necessary for this function to open or lock the
relation associated with the pg_policy entry it's modifying. The
error checks it's making on the rel are if anything counterproductive
(e.g., if we don't want to allow installation of policies on system
catalogs, here is not the place to prevent that). In particular, it
seems just wrong to insist on an ownership check. That has the net
effect of forcing people to use superuser for DROP OWNED BY, which
surely is not an effect we want. Also there is no point in rebuilding
the dependencies of the policy expressions, which aren't being
changed. Lastly, locking the table also seems counterproductive; it's
not helping to prevent race conditions, since we failed to re-read the
pg_policy row after acquiring the lock. That means that concurrent
DDL would likely result in "tuple concurrently updated/deleted"
errors, which is the same behavior this code will produce, with less
overhead.
Per discussion of bug #17062. Back-patch to all supported versions,
as the failure cases this eliminates seem just as undesirable in 9.6
as in HEAD.
Discussion: https://postgr.es/m/1573181.1624220108@sss.pgh.pa.us
Generalize the INDEX_CLEANUP VACUUM parameter (and the corresponding
reloption): make it into a ternary style boolean parameter. It now
exposes a third option, "auto". The "auto" option (which is now the
default) enables the "bypass index vacuuming" optimization added by
commit 1e55e7d1.
"VACUUM (INDEX_CLEANUP TRUE)" is redefined to once again make VACUUM
simply do any required index vacuuming, regardless of how few dead
tuples are encountered during the first scan of the target heap relation
(unless there are exactly zero). This gives users a way of opting out
of the "bypass index vacuuming" optimization, if for whatever reason
that proves necessary. It is also expected to be used by PostgreSQL
developers as a testing option from time to time.
"VACUUM (INDEX_CLEANUP FALSE)" does the same thing as it always has: it
forcibly disables both index vacuuming and index cleanup. It's not
expected to be used much in PostgreSQL 14. The failsafe mechanism added
by commit 1e55e7d1 addresses the same problem in a simpler way.
INDEX_CLEANUP can now be thought of as a testing and compatibility
option.
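The three settings, shown with a hypothetical table name:
    VACUUM (INDEX_CLEANUP AUTO) my_tab;   -- new default: allow the bypass optimization
    VACUUM (INDEX_CLEANUP TRUE) my_tab;   -- always do any required index vacuuming
    VACUUM (INDEX_CLEANUP FALSE) my_tab;  -- forcibly skip index vacuuming and cleanup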
Author: Peter Geoghegan <pg@bowt.ie>
Reviewed-By: Masahiko Sawada <sawada.mshk@gmail.com>
Reviewed-By: Justin Pryzby <pryzby@telsasoft.com>
Discussion: https://postgr.es/m/CAH2-WznrBoCST4_Gxh_G9hA8NzGUbeBGnOUC8FcXcrhqsv6OHQ@mail.gmail.com
Ordinarily, a pg_policy.polroles array wouldn't list the same role
more than once; but CREATE POLICY does not prevent that. If we
perform DROP OWNED BY on a role that is listed more than once,
RemoveRoleFromObjectPolicy either suffered an assertion failure
or encountered a tuple-updated-by-self error. Rewrite it to cope
correctly with duplicate entries, and add a CommandCounterIncrement
call to prevent the other problem.
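A hedged reproduction sketch with hypothetical names:
    CREATE ROLE alice;
    CREATE TABLE t (a int);
    -- CREATE POLICY accepts the same role twice ...
    CREATE POLICY p ON t TO alice, alice USING (true);
    -- ... and DROP OWNED BY must cope with the duplicate entry
    DROP OWNED BY alice;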
Per discussion, there's other cleanup that ought to happen here,
but this seems like the minimum essential fix.
Per bug #17062 from Alexander Lakhin. It's been broken all along,
so back-patch to all supported branches.
Discussion: https://postgr.es/m/17062-11f471ae3199ca23@postgresql.org
In the "simple Query" code path, it's fine for parse analysis or
execution of a utility statement to scribble on the statement's node
tree, since that'll just be thrown away afterwards. However it's
not fine if the node tree is in the plan cache, as then it'd be
corrupted for subsequent executions. Up to now we've dealt with
that by having individual utility-statement functions apply
copyObject() if they were going to modify the tree. But that's
prone to errors of omission. Bug #17053 from Charles Samborski
shows that CREATE/ALTER DOMAIN didn't get this memo, and can
crash if executed repeatedly from plan cache.
In the back branches, we'll just apply a narrow band-aid for that,
but in HEAD it seems prudent to have a more principled fix that
will close off the possibility of other similar bugs in future.
Hence, let's hoist the responsibility for doing copyObject up into
ProcessUtility from its children, thus ensuring that it happens for
all utility statement types.
Also, modify ProcessUtility's API so that its callers can tell it
whether a copy step is necessary. It turns out that in all cases,
the immediate caller knows whether the node tree is transient, so
this doesn't involve a huge amount of code thrashing. In this way,
while we lose a little bit in the execute-from-cache code path due
to sometimes copying node trees that wouldn't be mutated anyway,
we gain something in the simple-Query code path by not copying
throwaway node trees. Statements that are complex enough to be
expensive to copy are almost certainly ones that would have to be
copied anyway, so the loss in the cache code path shouldn't be much.
(Note that this whole problem applies only to utility statements.
Optimizable statements don't have the issue because we long ago made
the executor treat Plan trees as read-only. Perhaps someday we will
make utility statement execution act likewise, but I'm not holding
my breath.)
Discussion: https://postgr.es/m/931771.1623893989@sss.pgh.pa.us
Discussion: https://postgr.es/m/17053-3ca3f501bbc212b4@postgresql.org
When stuffing a plan from the plancache into a Portal, one is
not supposed to risk throwing an error between GetCachedPlan and
PortalDefineQuery; if that happens, the plan refcount incremented
by GetCachedPlan will be leaked. I managed to break this rule
while refactoring code in 9dbf2b7d7. There is no visible
consequence other than some memory leakage, and since nobody is
very likely to trigger the relevant error conditions many times
in a row, it's not surprising we haven't noticed. Nonetheless,
it's a bug, so rearrange the order of operations to remove the
hazard.
Noted on the way to looking for a better fix for bug #17053.
This mistake is pretty old, so back-patch to all supported
branches.
I started out with the goal of reporting ERRCODE_CONNECTION_FAILURE
when walrcv_connect() fails, but as I looked around I realized that
whoever wrote this code was of the opinion that errcodes are purely
optional. That's not my understanding of our project policy. Hence,
make sure that an errcode is provided in each ereport that (a) is
ERROR or higher level and (b) isn't arguably an internal logic error.
Also fix some very dubious existing errcode assignments.
While this is not per policy, it's also largely cosmetic, since few
of these cases could get reported to applications. So I don't
feel a need to back-patch.
Discussion: https://postgr.es/m/2189704.1623512522@sss.pgh.pa.us
It has been spotted that multiranges lack the ability to be decomposed into
individual ranges. Subscripting and a proper expanded object representation
require substantial work, and it's too late for v14. This commit
provides an implementation of unnest(multirange) and a cast from multirange
to an array of ranges, which is quite trivial.
unnest(multirange) is defined as a polymorphic procedure. The catalog
description of the procedure underlying the cast is duplicated for each
multirange type because we don't have an anyrangearray polymorphic type to
use here.
Catversion is bumped.
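Hedged examples, using int4multirange:
    SELECT unnest('{[1,3),[5,7)}'::int4multirange);
    SELECT '{[1,3),[5,7)}'::int4multirange::int4range[];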
Reported-by: Jonathan S. Katz
Discussion: https://postgr.es/m/flat/60258efe-bd7e-4886-82e1-196e0cac5433%40postgresql.org
Author: Alexander Korotkov
Reviewed-by: Justin Pryzby, Jonathan S. Katz, Zhihong Yu
An object found to be dropped when digging into the list of objects
returned by pg_event_trigger_ddl_commands() could cause a cache lookup
error, as the calls grabbing the object address and the type name
would fail if the object was missing.
Those lookup errors could be seen with combinations of ALTER TABLE
sub-commands involving identity columns. The lookup logic is changed in
this code path to behave like any other SQL-callable function by
ignoring objects that are not found, taking advantage of 2a10fdc. The
back-branches are not changed, as they would require that commit, which
is too invasive for stable branches.
While on it, add test cases to exercise event triggers with identity
columns, and stress more cases with the event ddl_command_end for
relations.
Author: Sven Klemm, Aleksander Alekseev, Michael Paquier
Discussion: https://postgr.es/m/CAMCrgp2R1cEXU53iYKtW6yVEp2_yKUz+z=3-CTrYpPP+xryRtg@mail.gmail.com
Commit 2453ea142 redefined pg_proc.proargtypes to include the types of
OUT parameters, for procedures only. While that had some advantages
for implementing the SQL-spec behavior of DROP PROCEDURE, it was pretty
disastrous from a number of other perspectives. Notably, since the
primary key of pg_proc is name + proargtypes, this made it possible to
have multiple procedures with identical names + input arguments and
differing output argument types. That would make it impossible to call
any one of the procedures by writing just NULL (or "?", or any other
data-type-free notation) for the output argument(s). The change also
seems likely to cause grave confusion for client applications that
examine pg_proc and expect the traditional definition of proargtypes.
Hence, revert the definition of proargtypes to what it was, and
undo a number of complications that had been added to support that.
To support the SQL-spec behavior of DROP PROCEDURE, when there are
no argmode markers in the command's parameter list, we perform the
lookup both ways (that is, matching against both proargtypes and
proallargtypes), succeeding if we get just one unique match.
In principle this could result in ambiguous-function failures
that would not happen when using only one of the two rules.
However, overloading of procedure names is thought to be a pretty
rare usage, so this shouldn't cause many problems in practice.
Postgres-specific code such as pg_dump can defend against any
possibility of such failures by being careful to specify argmodes
for all procedure arguments.
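A hedged illustration with hypothetical names:
    CREATE PROCEDURE proc_demo(IN a int, OUT b int) LANGUAGE plpgsql
        AS $$ BEGIN b := a; END $$;
    -- no argmode markers, so both lookup rules are tried; the input-arguments
    -- form (proargtypes) finds the unique match here
    DROP PROCEDURE proc_demo(int);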
This also fixes a few other bugs in the area of CALL statements
with named parameters, and improves the documentation a little.
catversion bump forced because the representation of procedures
with OUT arguments changes.
Discussion: https://postgr.es/m/3742981.1621533210@sss.pgh.pa.us
PersistHoldablePortal has long assumed that it should store the
entire output of the query-to-be-persisted, which requires rewinding
and re-reading the output. This is problematic if the query is not
stable: we might get different row contents, or even a different
number of rows, which'd confuse the cursor state mightily.
In the case where the cursor is NO SCROLL, this is very easy to
solve: just store the remaining query output, without any rewinding,
and tweak the portal's cursor state to match. Aside from removing
the semantic problem, this could be significantly more efficient
than storing the whole output.
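A hedged sketch of the NO SCROLL case (hypothetical cursor name):
    BEGIN;
    DECLARE c NO SCROLL CURSOR WITH HOLD FOR
        SELECT g FROM generate_series(1, 1000) AS g;
    FETCH 10 FROM c;
    COMMIT;             -- only the not-yet-fetched rows are persisted
    FETCH 10 FROM c;    -- continues from row 11
    CLOSE c;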
If the cursor is scrollable, there's not much we can do, but it
was already the case that scrolling a volatile query's result was
pretty unsafe. We can just document more clearly that getting
correct results from that is not guaranteed.
There are already prohibitions in place on using SCROLL with
FOR UPDATE/SHARE, which is one way for a SELECT query to have
non-stable results. We could imagine prohibiting SCROLL when
the query contains volatile functions, but that would be
expensive to enforce. Moreover, it could break applications
that work just fine, if they have functions that are in fact
stable but the user neglected to mark them so. So settle for
documenting the hazard.
While this problem has existed in some guise for a long time,
it got a lot worse in v11, which introduced the possibility
of persisting plpgsql cursors (perhaps implicit ones) even
when they violate the rules for what can be marked WITH HOLD.
Hence, I've chosen to back-patch to v11 but not further.
Per bug #17050 from Алексей Булгаков.
Discussion: https://postgr.es/m/17050-f77aa827dc85247c@postgresql.org
The internal SQL queries used by REFRESH MATERIALIZED VIEW CONCURRENTLY
include some aliases for its diff and temporary relations with
rather-generic names: diff, newdata, newdata2 and mv. Depending on the
queries used for the materialized view, using CONCURRENTLY could lead to
some internal failures if the matview query and those internal aliases
conflict.
Those names were chosen in 841c29c8. This commit switches instead
to a naming pattern that is less likely to cause conflicts, based
on an idea from Thomas Munro, by appending _$ to those aliases. This is
not perfect as those new names could still conflict, but at least it has
the advantage of keeping the code readable and simple while reducing the
likelihood of conflicts to close to zero.
Reported-by: Mathis Rudolf
Author: Bharath Rupireddy
Reviewed-by: Bernd Helmle, Thomas Munro, Michael Paquier
Discussion: https://postgr.es/m/109c267a-10d2-3c53-b60e-720fcf44d9e8@credativ.de
Backpatch-through: 9.6
Redefine '\0' (InvalidCompressionMethod) as meaning "if we need to
compress, use the current setting of default_toast_compression".
This allows '\0' to be a suitable default choice regardless of
datatype, greatly simplifying code paths that initialize tupledescs
and the like. It seems like a more user-friendly approach as well,
because now the default compression choice doesn't migrate into table
definitions, meaning that changing default_toast_compression is
usually sufficient to flip an installation's behavior; one needn't
tediously issue per-column ALTER SET COMPRESSION commands.
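A hedged illustration (assumes a build with lz4 support; hypothetical
table name):
    SET default_toast_compression = 'lz4';
    CREATE TABLE blobs (payload text);
    -- attcompression stays '\0', so compression of new values follows the GUC;
    -- flipping default_toast_compression later changes behavior without any
    -- per-column ALTER ... SET COMPRESSION commands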
Along the way, fix a few minor bugs and documentation issues
with the per-column-compression feature. Adopt more robust
APIs for SetIndexStorageProperties and GetAttributeCompression.
Bump catversion because typical contents of attcompression will now
be different. We could get away without doing that, but it seems
better to ensure v14 installations all agree on this. (We already
forced initdb for beta2, anyway.)
Discussion: https://postgr.es/m/626613.1621787110@sss.pgh.pa.us
The error messages, docs, and one of the options were using
'parallel degree' to indicate the parallelism used by the VACUUM command.
We normally use 'parallel workers' in other places, so change the wording
for parallel vacuum accordingly.
Author: Bharath Rupireddy
Reviewed-by: Dilip Kumar, Amit Kapila
Backpatch-through: 13
Discussion: https://postgr.es/m/CALj2ACWz=PYrrFXVsEKb9J1aiX4raA+UBe02hdRp_zqDkrWUiw@mail.gmail.com
Now that attcompression is just a char, there's a lot of wasted
padding space after it. Move it into the group of char-wide
columns to save a net of 4 bytes per pg_attribute entry. While
we're at it, swap the order of attstorage and attalign to make for
a more logical grouping of these columns.
Also re-order actions in related code to match the new field ordering.
This patch also fixes one outright bug: equalTupleDescs() failed to
compare attcompression. That could, for example, cause relcache
reload to fail to adopt a new value following a change.
Michael Paquier and Tom Lane, per a gripe from Andres Freund.
Discussion: https://postgr.es/m/20210517204803.iyk5wwvwgtjcmc5w@alap3.anarazel.de
If we redirected a replicated tuple operation into a partition child
table, and then tried to fire AFTER triggers for that event, the
relation cache entry for the child table was already closed. This has
no visible ill effects as long as the entry is still there and still
valid, but an unluckily-timed cache flush could result in a crash or
other misbehavior.
To fix, postpone the ExecCleanupTupleRouting call (which is what
closes the child table) until after we've fired triggers. This
requires a bit of refactoring so that the cleanup function can
have access to the necessary state.
In HEAD, I took the opportunity to simplify some of worker.c's
function APIs based on use of the new ApplyExecutionData struct.
However, it doesn't seem safe/practical to back-patch that aspect,
at least not without a lot of analysis of possible interactions
with a04daa97a.
In passing, add an Assert to afterTriggerInvokeEvents to catch
such cases. This seems worthwhile because we've grown a number
of fairly unstructured ways of calling AfterTriggerEndQuery.
Back-patch to v13, where worker.c grew the ability to deal with
partitioned target tables.
Discussion: https://postgr.es/m/3382681.1621381328@sss.pgh.pa.us