Add leader_pid to pg_stat_subscription. leader_pid is the process ID of
the leader apply worker if this process is a parallel apply worker. If
this field is NULL, it indicates that the process is a leader apply
worker or a synchronization worker. The new column makes it easier to
distinguish parallel apply workers from other kinds of workers and helps
to identify the leader for the parallel workers corresponding to a
particular subscription.
Additionally, update the leader_pid column in pg_stat_activity as well to
display the PID of the leader apply worker for parallel apply workers.
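For instance, a query along these lines (a minimal sketch; filtering on
leader_pid is just one way to isolate parallel apply workers) lists them
together with their leaders:
    SELECT subname, pid, leader_pid
      FROM pg_stat_subscription
     WHERE leader_pid IS NOT NULL;  -- parallel apply workers only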
Author: Hou Zhijie
Reviewed-by: Peter Smith, Sawada Masahiko, Amit Kapila, Shveta Mallik
Discussion: https://postgr.es/m/CAA4eK1+wyN6zpaHUkCLorEWNx75MG0xhMwcFhvjqm2KURZEAGw@mail.gmail.com
Avoid use of "_" in SGML IDs. Awhile back that was actually
disallowed by the toolchain, as a consequence of which our convention
has been to use "-" instead. Fix a couple of stragglers that are
particularly inconsistent with that convention and with related IDs.
This is just neatnik-ism, so no need for back-patch.
Discussion: https://postgr.es/m/769446.1673478332@sss.pgh.pa.us
Entries of pg-user in pg_ident.conf that are quoted and include '\1'
allow a replacement from a subexpression in a system user regexp. This
commit adds a test to track this behavior and a note in the
documentation, as it could be affected by the use of an AuthToken for
the pg-user in the IdentLines parsed.
This subject has come up in the discussion aimed at extending the
support of pg-user in ident entries for more patterns.
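As an illustration (a hypothetical map; all names invented here), a
quoted pg-user containing \1 still has the subexpression substituted:
    # MAPNAME   SYSTEM-USERNAME          PG-USERNAME
    testmap     /^(.*)@example\.com$     "\1"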
Author: Jelte Fennema
Discussion: https://postgr.es/m/CAGECzQRNow4MwkBjgPxywXdJU_K3a9+Pm78JB7De3yQwwkTDew@mail.gmail.com
Add a cross-reference from the part of the page that introduces SECURITY
INVOKER and SECURITY DEFINER to the part of the page that talks about
writing SECURITY DEFINER functions safely, so that users are less likely
to miss it.
Remove discussion of the pre-8.3 behavior on the theory that it's
probably not very relevant any more, that release having gone out of
support nearly a decade ago.
Add a mention of the new createrole_self_grant GUC, which in
certain cases might need to be set to a safe value to avoid
unexpected consequences.
Possibly this section needs major surgery rather than just these
small tweaks, but hopefully this is at least a small step
forward.
Discussion: http://postgr.es/m/CA+Tgmoauqd1cHQjsNEoxL5O-kEO4iC9dAPyCudSvmNqPJGmy9g@mail.gmail.com
Update the reference pages for various ALTER commands that
mentioned that you must be a member of the role that will be the
new owner to instead say that you must be able to SET ROLE
to the new owner. Update ddl.sgml's general statement on this
topic along similar lines.
Likewise, update CREATE SCHEMA and CREATE DATABASE, which
have options to specify who will own the new objects, to say
that you must be able to SET ROLE to the role that will own
them.
Finally, update the documentation for the GRANT statement
itself with some general principles about how the SET option
works and how it can be used.
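For example (a sketch with invented role and table names), ownership
changes now hinge on the ability to SET ROLE:
    GRANT new_owner TO alice WITH SET TRUE;  -- alice may SET ROLE to new_owner
    ALTER TABLE mytab OWNER TO new_owner;    -- permitted when run by alice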
Patch by me, reviewed (but not fully endorsed) by Noah Misch.
Discussion: http://postgr.es/m/CA+TgmoZk6VB3DQ83+DO5P_HP=M9PQAh1yj-KgeV30uKefVaWDg@mail.gmail.com
Commit 60684dd8 left loose ends when it came to maintaining toast
tables or partitions.
For toast tables, simply skip the privilege check if the toast table
is an indirect target of the maintenance command, because the main
table privileges have already been checked.
For partitions, allow the maintenance command if the user has the
MAINTAIN privilege on the partition or any parent.
Also make CLUSTER emit "skipping" messages when the user doesn't have
privileges, similar to VACUUM.
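A sketch of the partition case (table and role names invented):
    GRANT MAINTAIN ON part_parent TO alice;
    -- alice may now run maintenance commands on any partition as well:
    VACUUM part_parent_child;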
Author: Nathan Bossart
Reported-by: Pavel Luzanov
Reviewed-by: Pavel Luzanov, Ted Yu
Discussion: https://postgr.es/m/20230113231339.GA2422750@nathanxps13
The prior behavior was confusing and hard to document. For instance,
if you had UPDATE privileges, you could lock a table in any lock mode
except ACCESS SHARE mode.
Now, if granted a privilege to lock at a given mode, one also has
privileges to lock at a less-conflicting mode. MAINTAIN, UPDATE,
DELETE, and TRUNCATE privileges allow any lock mode. INSERT privileges
allow ROW EXCLUSIVE (or below). SELECT privileges allow ACCESS SHARE.
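For example (hypothetical table), SELECT privilege now suffices for the
least-conflicting mode:
    BEGIN;
    LOCK TABLE mytab IN ACCESS SHARE MODE;  -- allowed with just SELECT
    COMMIT;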
Reviewed-by: Nathan Bossart
Discussion: https://postgr.es/m/9550c76535404a83156252b25a11babb4792ea1e.camel%40j-davis.com
As introduced in 2258e76, the docs were hard to parse:
- The examples listed a lot of long records, bloating the output.
These are switched to show fewer records with the expanded format,
similarly to pageinspect.
- The function descriptions listed all the OUT parameters, producing
long lines. This is updated so as only the input parameters are
documented, clarifying the whole.
- Remove one example of pg_get_wal_stats() with per_record set to true,
which is not really necessary once the output produced is known and
the behavior of the parameter is documented.
While at it, fix a few grammar mistakes and simplify a couple of
sentences.
Author: Bharath Rupireddy
Discussion: https://postgr.es/m/CALj2ACVGcUpziGgQrcT-1G3dHWQQfWjYBu1YQ2ypv9y86dgogg@mail.gmail.com
Backpatch-through: 15
In both partitioning and traditional inheritance, require child
columns to be GENERATED if and only if their parent(s) are.
Formerly we allowed the case of an inherited column being
GENERATED when its parent isn't, but that results in inconsistent
behavior: the column can be directly updated through an UPDATE
on the parent table, leading to it containing a user-supplied
value that might not match the generation expression. This also
fixes an oversight that we enforced partition-key-columns-can't-
be-GENERATED against parent tables, but not against child tables
that were dynamically attached to them.
Also, remove the restriction that the child's generation expression
be equivalent to the parent's. In the wake of commit 3f7836ff6,
there doesn't seem to be any reason that we need that restriction,
since generation expressions are always computed per-table anyway.
By removing this, we can also allow a child to merge multiple
inheritance parents with inconsistent generation expressions, by
overriding them with its own expression, much as we've long allowed
for DEFAULT expressions.
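A sketch of the newly-allowed override (invented tables):
    CREATE TABLE p1 (f1 int, f2 int GENERATED ALWAYS AS (f1 + 1) STORED);
    CREATE TABLE p2 (f1 int, f2 int GENERATED ALWAYS AS (f1 + 2) STORED);
    -- the child overrides its parents' inconsistent expressions:
    CREATE TABLE c (f2 int GENERATED ALWAYS AS (f1 * 10) STORED)
      INHERITS (p1, p2);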
Since we're rejecting a case that we used to accept, this doesn't
seem like a back-patchable change. Given the lack of field
complaints about the inconsistent behavior, it's likely that no
one is doing this anyway, but we won't change it in minor releases.
Amit Langote and Tom Lane
Discussion: https://postgr.es/m/2793383.1672944799@sss.pgh.pa.us
Can be set to the empty string, or to either or both of "set" or
"inherit". If set to a non-empty value, a non-superuser who creates
a role (necessarily by relying on the CREATEROLE privilege) will
grant that role back to themselves with the specified options.
This isn't a security feature, because the grant that this feature
triggers can also be performed explicitly. Instead, it's a user experience
feature. A superuser would necessarily inherit the privileges of any
created role and be able to access all such roles via SET ROLE;
with this patch, you can configure createrole_self_grant = 'set, inherit'
to provide a similar experience for a user who has CREATEROLE but not
SUPERUSER.
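A minimal sketch of the intended usage (role name invented):
    SET createrole_self_grant = 'set, inherit';
    CREATE ROLE managed_role;  -- run by a CREATEROLE non-superuser
    -- implicitly performs something like:
    --   GRANT managed_role TO current_user WITH SET TRUE, INHERIT TRUE;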
Discussion: https://postgr.es/m/CA+TgmobN59ct+Emmz6ig1Nua2Q-_o=r6DSD98KfU53kctq_kQw@mail.gmail.com
Previously, CREATEROLE users were permitted to make nearly arbitrary
changes to roles that they didn't create, with certain exceptions,
particularly superuser roles. Instead, allow CREATEROLE users to make such
changes to roles for which they possess ADMIN OPTION, and to
grant membership only in roles for which they possess ADMIN OPTION.
When a CREATEROLE user who is not a superuser creates a role, grant
ADMIN OPTION on the newly-created role to the creator, so that they
can administer roles they create or for which they have been given
privileges.
With these changes, CREATEROLE users still have very significant
powers that unprivileged users do not receive: they can alter, rename,
drop, comment on, change the password for, and change security labels
on roles. However, they can now do these things only for roles for
which they possess appropriate privileges, rather than all
non-superuser roles; moreover, they cannot grant a role such as
pg_execute_server_program unless they themselves possess it.
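For instance (invented names), a CREATEROLE user's reach now follows
ADMIN OPTION:
    CREATE ROLE staff;         -- creator automatically gets ADMIN OPTION
    GRANT staff TO colleague;  -- allowed: creator holds ADMIN OPTION
    ALTER ROLE unrelated_role PASSWORD 'x';  -- fails without ADMIN OPTION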
Patch by me, reviewed by Mark Dilger.
Discussion: https://postgr.es/m/CA+TgmobN59ct+Emmz6ig1Nua2Q-_o=r6DSD98KfU53kctq_kQw@mail.gmail.com
Currently, for large transactions, the publisher sends the data in
multiple streams (changes divided into chunks depending upon
logical_decoding_work_mem), and then on the subscriber-side, the apply
worker writes the changes into temporary files and once it receives the
commit, it reads from those files and applies the entire transaction. To
improve the performance of such transactions, we can instead allow them to
be applied via parallel workers.
In this approach, we assign a new parallel apply worker (if available) as
soon as the xact's first stream is received and the leader apply worker
will send changes to this new worker via shared memory. The parallel apply
worker will directly apply the change instead of writing it to temporary
files. However, if the leader apply worker times out while attempting to
send a message to the parallel apply worker, it will switch to
"partial serialize" mode - in this mode, the leader serializes all
remaining changes to a file and notifies the parallel apply workers to
read and apply them at the end of the transaction. We use a non-blocking
way to send the messages from the leader apply worker to the parallel
apply worker to avoid deadlocks. We keep this parallel apply worker
assigned until the transaction commit is received and also wait for the
worker to finish at commit. This preserves commit ordering and avoids
writing to and reading
from files in most cases. We still need to spill if there is no worker
available.
This patch also extends the SUBSCRIPTION 'streaming' parameter so that the
user can control whether to apply the streaming transaction in a parallel
apply worker or spill the change to disk. The user can set the streaming
parameter to 'on/off', or 'parallel'. The parameter value 'parallel' means
the streaming will be applied via a parallel apply worker, if available.
The parameter value 'on' means the streaming transaction will be spilled
to disk. The default value is 'off' (same as current behaviour).
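A sketch of enabling the new mode (connection string and names invented):
    CREATE SUBSCRIPTION mysub
      CONNECTION 'host=publisher dbname=postgres'
      PUBLICATION mypub
      WITH (streaming = parallel);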
In addition, the patch extends the logical replication STREAM_ABORT
message so that abort_lsn and abort_time can also be sent which can be
used to update the replication origin in parallel apply worker when the
streaming transaction is aborted. Because this message extension is needed
to support parallel streaming, parallel streaming is not supported for
publications on servers < PG16.
Author: Hou Zhijie, Wang wei, Amit Kapila with design inputs from Sawada Masahiko
Reviewed-by: Sawada Masahiko, Peter Smith, Dilip Kumar, Shi yu, Kuroda Hayato, Shveta Mallik
Discussion: https://postgr.es/m/CAA4eK1+wyN6zpaHUkCLorEWNx75MG0xhMwcFhvjqm2KURZEAGw@mail.gmail.com
This allows an optional "S" modifier to be added to \dp and \z, to
have them include system objects in the list.
Note that this also changes the behaviour of a bare \dp or \z without
the "S" modifier to include temp objects in the list, and exclude
information_schema objects, making them consistent with other psql
meta-commands.
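For example, in psql, a bare \dp now lists temp objects too, while the
new modifier brings in system objects:
    \dpS
    \zS pg_class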
Nathan Bossart, reviewed by Maxim Orlov.
Discussion: https://postgr.es/m/20221206193606.GB3078082@nathanxps13
VACUUM normally ends by running vac_update_datfrozenxid(), which
requires a scan of pg_class. Therefore, if one attempts to vacuum a
database one table at a time --- as vacuumdb has done since v12 ---
we will spend O(N^2) time in vac_update_datfrozenxid(). That causes
serious performance problems in databases with tens of thousands of
tables, and indeed the effect is measurable with only a few hundred.
To add insult to injury, only one process per database can run
vac_update_datfrozenxid() at a time, so this behavior largely defeats
vacuumdb's -j option.
Hence, invent options SKIP_DATABASE_STATS and ONLY_DATABASE_STATS
to allow applications to postpone vac_update_datfrozenxid() until the
end of a series of VACUUM requests, and teach vacuumdb to use them.
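The intended pattern looks like this (table names invented):
    VACUUM (SKIP_DATABASE_STATS) t1;
    VACUUM (SKIP_DATABASE_STATS) t2;
    -- ... then one final pass to update datfrozenxid:
    VACUUM (ONLY_DATABASE_STATS);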
Per bug #17717 from Gunnar L. Sadly, this answer doesn't seem
like something we'd consider back-patching, so the performance
problem will remain in v12-v15.
Tom Lane and Nathan Bossart
Discussion: https://postgr.es/m/17717-6c50eb1c7d23a886@postgresql.org
In user-manag.sgml, document precisely what privileges are conveyed
by CREATEROLE. Make particular note of the fact that it allows
changing passwords and granting access to high-privilege roles.
Also remove the suggestion of using a user with CREATEROLE and
CREATEDB instead of a superuser, as there is no real security
advantage to this approach.
Elsewhere in the documentation, adjust text that suggests that
<literal>CREATEROLE</literal> only allows for role creation, and
refer to the documentation in user-manag.sgml as appropriate.
Patch by me, reviewed by Álvaro Herrera
Discussion: http://postgr.es/m/CA+TgmoZBsPL8nPhvYecx7iGo5qpDRqa9k_AcaW1SbOjugAY1Ag@mail.gmail.com
While at it, newlines are removed from the end of two elog() strings.
The others are simple grammar mistakes. One comment in pg_upgrade
referred incorrectly to sequences since a7e5457.
Author: Justin Pryzby
Discussion: https://postgr.es/m/20221230231257.GI1153@telsasoft.com
Backpatch-through: 11
This is like the existing bt_page_stats() function, but it can
report on a range of pages rather than just one at a time.
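Assuming the new function is the range variant bt_multi_page_stats()
(the name comes from the commit subject, which is not quoted here), a
call might look like:
    -- stats for five consecutive pages of a btree index
    SELECT * FROM bt_multi_page_stats('my_btree_index', 1, 5);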
I don't have a huge amount of faith in the portability of the
new test cases, but they do pass in a 32-bit FreeBSD VM here.
Further adjustment may be needed depending on buildfarm results.
Hamid Akhtar, reviewed by Naeem Akhter, Bertrand Drouvot,
Bharath Rupireddy, and myself
Discussion: https://postgr.es/m/CANugjht-=oGMRmNJKMqnBC69y7vr+wHDmm0ZK6-1pJsxoBKBbA@mail.gmail.com
A refcursor variable that is bound to a specific query (by declaring
it with "CURSOR FOR") now chooses a portal name in the same way as an
unbound, plain refcursor variable. Its string value starts out as
NULL, and unless that's overridden by manual assignment, it will be
replaced by a unique-within-session portal name during OPEN.
The previous behavior was to initialize such variables to contain
their own name, resulting in that also being the portal name unless
the user overwrote it before OPEN. The trouble with this is that
it causes failures due to conflicting portal names if the same
cursor variable name is used in different functions. It is pretty
non-orthogonal to have bound and unbound refcursor variables behave
differently on this point, too, so let's change it.
This change can cause compatibility problems for applications that
open a bound cursor in a plpgsql function and then use it in the
calling code without explicitly passing back the refcursor value
(portal name). If the calling code simply assumes that the portal
name matches the called function's variable name, it will now fail.
That can be fixed by explicitly assigning a string value to the
refcursor variable before OPEN, e.g.
    DECLARE myc CURSOR FOR SELECT ...;
    BEGIN
      myc := 'myc';  -- add this
      OPEN myc;
We have no documentation examples showing the troublesome usage
pattern, so we can hope it's rare in practice.
Patch by me; thanks to Pavel Stehule and Jan Wieck for review.
Discussion: https://postgr.es/m/1465101.1667345983@sss.pgh.pa.us
When collecting an ANALYZE sample on foreign tables, postgres_fdw fetched
all rows and performed the sampling locally. For large tables this meant
transferring and immediately discarding large amounts of data.
This commit allows the sampling to be performed on the remote server,
transferring only the much smaller sample. The sampling is performed
using the built-in TABLESAMPLE methods (system, bernoulli) or random()
function, depending on the remote server version.
Remote sampling can be enabled by analyze_sampling on the foreign server
and/or foreign table, with supported values 'off', 'auto', 'system',
'bernoulli' and 'random'. The default value is 'auto' which uses either
'bernoulli' (TABLESAMPLE method) or 'random' (for remote servers without
TABLESAMPLE support).
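A sketch of enabling it at the foreign-table level (names invented):
    ALTER FOREIGN TABLE remote_tab
      OPTIONS (ADD analyze_sampling 'bernoulli');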
Teach VACUUM to decide whether or not to trigger freezing at the
level of whole heap pages. Individual XID and MXID fields from tuple
headers now trigger freezing of whole pages, rather than independently
triggering freezing of each individual tuple header field.
Managing the cost of freezing over time now significantly influences
when and how VACUUM freezes. The overall amount of WAL written is the
single most important freezing related cost, in general. Freezing each
page's tuples together in batch allows VACUUM to take full advantage of
the freeze plan WAL deduplication optimization added by commit 9e540599.
Also teach VACUUM to trigger page-level freezing whenever it detects
that heap pruning generated an FPI. We'll have already written a large
amount of WAL just to do that much, so it's very likely a good idea to
get freezing out of the way for the page early. This only happens in
cases where it will directly lead to marking the page all-frozen in the
visibility map.
In most cases "freezing a page" removes all XIDs < OldestXmin, and all
MXIDs < OldestMxact. It doesn't quite work that way in certain rare
cases involving MultiXacts, though. It is convenient to define "freeze
the page" in a way that gives FreezeMultiXactId the leeway to put off
the work of processing an individual tuple's xmax whenever it happens to
be a MultiXactId that would require an expensive second pass to process
aggressively (allocating a new multi is especially worth avoiding here).
FreezeMultiXactId is eager when processing is cheap (as it usually is),
and lazy in the event of an individual multi that happens to require
expensive second pass processing. This avoids regressions related to
processing of multis that page-level freezing might otherwise cause.
Author: Peter Geoghegan <pg@bowt.ie>
Reviewed-By: Jeff Davis <pgsql@j-davis.com>
Reviewed-By: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/CAH2-WzkFok_6EAHuK39GaW4FjEFQsY=3J0AAd6FXk93u-Xq3Fg@mail.gmail.com
Given the soft-input-error feature, we can reduce these functions
to be just thin wrappers around a soft-error call of the
corresponding datatype input function. This means less code and
more certainty that the to_reg* functions match the normal input
behavior.
Notably, it also means that they will accept numeric OID input,
which they didn't before. It's not clear to me if that omission
had more than laziness behind it, but it doesn't seem like
something we need to work hard to preserve.
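For instance, numeric OID input is now accepted:
    SELECT to_regclass('1259');  -- resolves to pg_class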
Discussion: https://postgr.es/m/3910031.1672095600@sss.pgh.pa.us
This option extracts (potentially decompressing) the full-page images
included in WAL records into a given target directory. These images are
subject to the same filtering rules as the normal display of the WAL
records; hence, with --relation one can for example extract only the FPIs
issued on the specified relation. By default, the records are printed or
their stats computed (--stats); using --quiet saves only the images,
without generating any other output.
This is a tool aimed mostly at very experienced users, useful for
fixing page-level corruption or just analyzing the past state of a page;
until now there was no easy way to do that with the in-core tools when
looking at WAL.
Each block is saved in a separate file, to ease manipulation, with the
file name following the format <lsn>.<ts>.<db>.<rel>.<blk>_<fork>.
For instance, 00000000-010000C0.1663.1.6117.123_main refers to:
- WAL record LSN in hexa format (00000000-010000C0).
- Tablespace OID (1663).
- Database OID (1).
- Relfilenode (6117).
- Block number (123).
- Fork name of the file this block came from (_main).
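A hypothetical invocation (assuming the option is spelled --save-fullpage
per the commit subject, which is not quoted here; paths and segment name
invented):
    pg_waldump --quiet --save-fullpage=./fpi \
        --relation=1663/1/6117 000000010000000000000001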
Author: David Christensen
Reviewed-by: Sho Kato, Justin Pryzby, Bharath Rupireddy, Matthias van de
Meent
Discussion: https://postgr.es/m/CAOxo6XKjQb2bMSBRpePf3ZpzfNTwjQUc4Tafh21=jzjX6bX8CA@mail.gmail.com
This enables streaming or serializing changes immediately in logical
decoding. This parameter is intended to be used to test logical decoding
and replication of large transactions, for which we would otherwise need
to generate changes until logical_decoding_work_mem is reached.
This helps reduce the runtime of existing tests related to logical
replication of in-progress transactions and will help in writing tests
for the upcoming feature for applying large in-progress transactions in
parallel.
Author: Shi yu
Reviewed-by: Sawada Masahiko, Shveta Mallik, Amit Kapila, Dilip Kumar, Kuroda Hayato, Kyotaro Horiguchi
Discussion: https://postgr.es/m/OSZPR01MB63104E7449DBE41932DB19F1FD1B9@OSZPR01MB6310.jpnprd01.prod.outlook.com
After some copy-editing I did in commit 3a06a79cd137, we have a <sect2>
that only contains a warning box. This doesn't look good. Rework by
moving the sect2 title to be the warning's title, and put the 'id' to it
as well, so that the external reference continues to work.
Backpatch to 15.
In branch master, I also take the opportunity to add titles to a couple
of other warning boxes elsewhere in the documentation.
Discussion: https://postgr.es/m/20221219164713.ccnlvtkyj6lmshqq@alvherre.pgsql
Commit 2f9661311b changed command tags from strings to numbers, but
forgot to adjust the code in the event trigger example, which
consequently failed to compile.
While fixing that, improve the indentation to adhere to pgindent style.
Backpatch to v13, where the change was introduced.
Author: Laurenz Albe
Discussion: https://postgr.es/m/81e36ac17dc80489e74dc5b6914afa6ccdb1a99d.camel@cybertec.at
The former name was discussed as being confusing, so use "split", as per
a suggestion from Magnus Hagander.
While at it, one of the output arguments is renamed from "segno" to
"segment_number", as per a suggestion from Kyotaro Horiguchi.
The documentation is updated to reflect all these changes.
Bump catalog version.
Author: Bharath Rupireddy, Michael Paquier
Discussion: https://postgr.es/m/CABUevEytQVaOOhGdoh0D7hGwe3fuKcRF6NthsSW7ww04EmtFgQ@mail.gmail.com
1349d279 added query planner support to allow more efficient execution of
aggregate functions which have an ORDER BY or a DISTINCT clause. Prior to
that commit, the planner would only request that the lower planner produce
a plan with the order required for the GROUP BY clause and it would be
left up to nodeAgg.c to perform the final sort of records within each
group so that the aggregate transition functions were called in the
correct order. Now that the planner requests the lower planner produce a
plan with the GROUP BY and the ORDER BY / DISTINCT aggregates in mind,
there is the possibility that the planner chooses a plan which could be
less efficient than what would have been produced before 1349d279.
While developing 1349d279, I had in mind that Incremental Sort would help
us in cases where an index exists only on the GROUP BY column(s).
Incremental Sort would just replace the implicit tuplesorts which are
being performed in nodeAgg.c. However, because the planner has the
flexibility to instead choose a plan which just performs a full sort on
both the GROUP BY and ORDER BY / DISTINCT aggregate columns, there is
potential for the planner to make a bad choice. The costing for
Incremental Sort is not perfect as it assumes an even distribution of rows
to sort within each sort group.
Here we add an escape hatch in the form of the enable_presorted_aggregate
GUC. This will allow users to get the pre-PG16 behavior in cases where
they have no other means to convince the query planner to produce a plan
which only sorts on the GROUP BY column(s).
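The escape hatch itself is just:
    SET enable_presorted_aggregate = off;  -- revert to pre-PG16 planning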
Discussion: https://postgr.es/m/CAApHDvr1Sm+g9hbv4REOVuvQKeDWXcKUAhmbK5K+dfun0s9CvA@mail.gmail.com
This function takes as input a WAL segment name and returns a tuple made
of the segment sequence number (dependent on the WAL segment size of the
cluster) and its timeline, as a thin SQL wrapper around the existing
XLogFromFileName().
This function has multiple uses, like computing an LSN from a file name
and an offset, or finding the timeline of a segment without having to do
some math based on the first eight characters of the segment.
Bump catalog version.
Author: Bharath Rupireddy
Reviewed-by: Nathan Bossart, Kyotaro Horiguchi, Maxim Orlov, Michael
Paquier
Discussion: https://postgr.es/m/CALj2ACWV=FCddsxcGbVOA=cvPyMr75YCFbSQT6g4KDj=gcJK4g@mail.gmail.com
A new function pg_stat_get_backend_subxact() can be used to get
information about the number of subtransactions in the cache of
a particular backend and whether that cache has overflowed. This
can be useful for tracking down performance problems that can
result from overflowed snapshots.
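For example (assuming the function takes the backend ID as reported by
pg_stat_get_backend_idset(), and that backend 1 exists):
    SELECT * FROM pg_stat_get_backend_subxact(1);
    -- returns the subtransaction count and whether the cache overflowed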
Dilip Kumar, reviewed by Zhihong Yu, Nikolay Samokhvalov,
Justin Pryzby, Nathan Bossart, Ashutosh Sharma, Julien
Rouhaud. Additional design comments from Andres Freund,
Tom Lane, Bruce Momjian, and David G. Johnston.
Discussion: http://postgr.es/m/CAFiTN-ut0uwkRJDQJeDPXpVyTWD46m3gt3JDToE02hTfONEN=Q@mail.gmail.com
This option selects the default transfer mode. Having an explicit
option is handy to make scripts and tests more explicit. It also
makes it easier to talk about a "copy" mode rather than "the default
mode" or something like that, since until now the default mode didn't
have an externally visible name.
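A hypothetical invocation making the default explicit (the --copy
spelling comes from the commit subject; paths invented):
    pg_upgrade --copy \
        -b /usr/lib/postgresql/15/bin -B /usr/lib/postgresql/16/bin \
        -d /var/lib/postgresql/15/data -D /var/lib/postgresql/16/data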
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Reviewed-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/50a97009-8ff9-ca4d-a0f6-6086a6775a5b%40enterprisedb.com
Add support for hexadecimal, octal, and binary integer literals:
0x42F
0o273
0b100101
per SQL:202x draft.
This adds support in the lexer as well as in the integer type input
functions.
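For example, all three forms now parse, both as literals and through the
integer input functions:
    SELECT 0x42F AS hex, 0o273 AS oct, 0b100101 AS bin;
    SELECT int4 '0b100101';  -- the input function accepts it too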
Reviewed-by: John Naylor <john.naylor@enterprisedb.com>
Reviewed-by: Zhihong Yu <zyu@yugabyte.com>
Reviewed-by: David Rowley <dgrowleyml@gmail.com>
Reviewed-by: Dean Rasheed <dean.a.rasheed@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/b239564c-cad0-b23e-c57e-166d883cb97d@enterprisedb.com
Commits f92944137 et al. made IsInTransactionBlock() set the
XACT_FLAGS_NEEDIMMEDIATECOMMIT flag before returning "false",
on the grounds that that kept its API promises equivalent to those of
PreventInTransactionBlock(). This turns out to be a bad idea though,
because it allows an ANALYZE in a pipelined series of commands to
cause an immediate commit, which is unexpected.
Furthermore, if we return "false" then we have another issue,
which is that ANALYZE will decide it's allowed to do internal
commit-and-start-transaction sequences, thus possibly unexpectedly
committing the effects of previous commands in the pipeline.
To fix the latter situation, invent another transaction state flag
XACT_FLAGS_PIPELINING, which explicitly records the fact that we
have executed some extended-protocol command and not yet seen a
commit for it. Then, require that flag to not be set before allowing
IsInTransactionBlock() to return "false".
Having done that, we can remove its setting of NEEDIMMEDIATECOMMIT
without fear of causing problems. This means that the API guarantees
of IsInTransactionBlock now diverge from PreventInTransactionBlock,
which is mildly annoying, but it seems OK given the very limited usage
of IsInTransactionBlock. (In any case, a caller preferring the old
behavior could always set NEEDIMMEDIATECOMMIT for itself.)
For consistency also require XACT_FLAGS_PIPELINING to not be set
in PreventInTransactionBlock. This too is meant to prevent commands
such as CREATE DATABASE from silently committing previous commands
in a pipeline.
Per report from Peter Eisentraut. As before, back-patch to all
supported branches (which sadly no longer includes v10).
Discussion: https://postgr.es/m/65a899dd-aebc-f667-1d0a-abb89ff3abf8@enterprisedb.com
Add some cross-links between chapter "20. Server Parameters" and
"31. Logical Replication" regarding the available configuration
parameters, for easier navigation; and some more explanatory text too.
I (Álvaro) chose to duplicate max_replication_slots in Chapter 20,
because it has completely different meanings at each side of the
replication link.
Author: Peter Smith <smithpb2250@gmail.com>
Reviewed-by: vignesh C <vignesh21@gmail.com>
Reviewed-by: samay sharma <smilingsamay@gmail.com>
Discussion: https://postgr.es/m/CAHut+PsESqpy7w3Y6cX98c255ZuCjvipkhKjy6hZBjOv4E6iJA@mail.gmail.com
This is straightforward as far as it goes. However, it does not
attempt to trap errors occurring during the execution of domain
CHECK constraints. Since those are general user-defined
expressions, the only way to do that would involve starting up a
subtransaction for each check. Of course the entire point of
the soft-errors feature is to not need subtransactions, so that
would be self-defeating. For now, we'll rely on the assumption
that domain checks are written to avoid throwing errors.
Discussion: https://postgr.es/m/1181028.1670635727@sss.pgh.pa.us
pg_input_is_valid() returns boolean, while pg_input_error_message()
returns the primary error message if the input is bad, or NULL
if the input is OK. The main reason for having two functions is
so that we can test both the details-wanted and the no-details-wanted
code paths.
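For instance:
    SELECT pg_input_is_valid('42', 'integer');           -- true
    SELECT pg_input_is_valid('forty-two', 'integer');    -- false
    SELECT pg_input_error_message('forty-two', 'integer');
    -- returns the primary error message; NULL for valid input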
Although these are primarily designed with testing in mind,
it could well be that they'll be useful to end users as well.
This patch is mostly by me, but it owes very substantial debt to
earlier work by Nikita Glukhov, Andrew Dunstan, and Amul Sul.
Thanks to Andres Freund for review.
Discussion: https://postgr.es/m/3bbbb0df-7382-bf87-9737-340ba096e034@postgrespro.ru
Postgres' standard mechanism for reporting errors (ereport() or elog())
is used for all sorts of error conditions. This means that throwing
an exception via ereport(ERROR) requires an expensive transaction or
subtransaction abort and cleanup, since the exception catcher dare not
make many assumptions about what has gone wrong. There are situations
where we would rather have a lighter-weight mechanism for dealing
with errors that are known to be safe to recover from without a full
transaction cleanup. This commit creates infrastructure to let us
adapt existing error-reporting code for that purpose. See the
included documentation changes for details. Follow-on commits will
provide test code and usage examples.
The near-term plan is to convert most if not all datatype input
functions to report invalid input "softly". This will enable
implementing some SQL/JSON features cleanly and without the cost
of subtransactions, and it will also allow creating COPY options
to deal with bad input without cancelling the whole COPY.
This patch is mostly by me, but it owes very substantial debt to
earlier work by Nikita Glukhov, Andrew Dunstan, and Amul Sul.
Thanks also to Andres Freund for review.
Discussion: https://postgr.es/m/3bbbb0df-7382-bf87-9737-340ba096e034@postgrespro.ru