I noticed that there were exactly no complete examples of using a
transition table in a trigger function, and no clear description of just
how you'd do it either.  Improve that.
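For reference, a trigger using a transition table looks roughly like the
following sketch (table, function, and trigger names are invented for
illustration):

    CREATE TABLE emp (empname text, salary integer);

    CREATE FUNCTION emp_audit() RETURNS trigger AS $$
    BEGIN
        -- new_emp is the transition table declared in REFERENCING below;
        -- it contains every row inserted by the triggering statement.
        RAISE NOTICE 'inserted % row(s)', (SELECT count(*) FROM new_emp);
        RETURN NULL;  -- return value is ignored for statement triggers
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER emp_audit_ins
        AFTER INSERT ON emp
        REFERENCING NEW TABLE AS new_emp
        FOR EACH STATEMENT
        EXECUTE PROCEDURE emp_audit();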
The standard says that all changes of the same kind (insert, update, or
delete) caused in one table by a single SQL statement should be reported
in a single transition table; and by that, they mean to include foreign key
enforcement actions cascading from the statement's direct effects. It's
also reasonable to conclude that if the standard had wCTEs, it would say
that the effects of wCTEs applying to the same table as each other or as
the outer statement should be merged into one transition table.  We
weren't doing it like that.
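For instance, under that reading, both of the inserts below should be
reported in one transition table for t1, and t1's AFTER STATEMENT trigger
should fire just once (the table name is illustrative):

    WITH w AS (INSERT INTO t1 VALUES (1))
    INSERT INTO t1 VALUES (2);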
Hence, arrange to merge tuples from multiple update actions into a single
transition table as much as we can. There is a problem, which is that if
the firing of FK enforcement triggers and after-row triggers with
transition tables is interspersed, we might need to report more tuples
after some triggers have already seen the transition table. It seems like
a bad idea for the transition table to be mutable between trigger calls.
There's no good way around this without a major redesign of the FK logic,
so for now, resolve it by opening a new transition table each time this
happens.
Also, ensure that AFTER STATEMENT triggers fire just once per statement,
or once per transition table when we're forced to make more than one.
Previous versions of Postgres have allowed each FK enforcement query
to cause an additional firing of the AFTER STATEMENT triggers for the
referencing table, but that's certainly not per spec. (We're still
doing multiple firings of BEFORE STATEMENT triggers, though; is that
something worth changing?)
Also, forbid using transition tables with column-specific UPDATE triggers.
The spec requires such transition tables to show only the tuples for which
the UPDATE trigger would have fired, which means maintaining multiple
transition tables or else somehow filtering the contents at readout.
Maybe someday we'll bother to support that option, but it looks like a
lot of trouble for a marginal feature.
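Concretely, something like the following sketch is now rejected (names
are illustrative):

    CREATE TRIGGER emp_audit_upd
        AFTER UPDATE OF salary ON emp
        REFERENCING OLD TABLE AS old_emp NEW TABLE AS new_emp
        FOR EACH STATEMENT
        EXECUTE PROCEDURE emp_audit();
    -- fails: transition tables can't be used with a column-specific trigger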
The transition tables are now managed by the AfterTriggers data structures,
rather than being directly the responsibility of ModifyTable nodes. This
removes a subtransaction-lifespan memory leak introduced by my previous
band-aid patch 3c4359521.
In passing, refactor the AfterTriggers data structures to reduce the
management overhead for them, by using arrays of structs rather than
several parallel arrays for per-query-level and per-subtransaction state.
I failed to resist the temptation to do some copy-editing on the SGML
docs about triggers, above and beyond merely documenting the effects
of this patch.
Back-patch to v10, because we don't want the semantics of transition
tables to change post-release.
Patch by me, with help and review from Thomas Munro.
Discussion: https://postgr.es/m/20170909064853.25630.12825@wrigleys.postgresql.org
Document that rsync is an _optional_ way to upgrade standbys, suggest
rsync option --dry-run, and mention a way of upgrading one standby from
another using rsync.  Also clarify some instructions by specifying whether
they operate on the old or the new cluster.
Reported-by: Stephen Frost, Magnus Hagander
Discussion: https://postgr.es/m/20170914191250.GB6595@momjian.us
Backpatch-through: 9.5
The documentation claimed that one should send
"pg_same_as_startup_message" as the user name in the SCRAM messages, but
this did not match the actual implementation, so remove it.
This makes it clear that pg_upgrade standby upgrade instructions should
only be used in link mode, adds examples, and explains how rsync works
with links.
Reported-by: Andreas Joseph Krogh
Discussion: https://postgr.es/m/VisenaEmail.6c.c0e592c5af4ef0a2.15e785dcb61@tc7-visena
Backpatch-through: 9.5
We already had a psql variable VERSION that shows the verbose form of
psql's own version. Add VERSION_NAME to show the short form (e.g.,
"11devel") and VERSION_NUM to show the numeric form (e.g., 110000).
Also add SERVER_VERSION_NAME and SERVER_VERSION_NUM to show the short and
numeric forms of the server's version. (We'd probably add SERVER_VERSION
with the verbose string if it were readily available; but adding another
network round trip to get it seems too expensive.)
The numeric forms, in particular, are expected to be useful for scripting
purposes, now that psql can do conditional tests.
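For instance, a script might guard version-dependent commands along these
lines (a sketch; the is_v10_up variable name is invented):

    SELECT :SERVER_VERSION_NUM >= 100000 AS is_v10_up \gset
    \if :is_v10_up
        \echo connected to server :SERVER_VERSION_NAME
    \endif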
Back-patch of commit 9ae9d8c1549c384dbdb8363e1d932b7311d25c56.
Fabien Coelho, reviewed by Pavel Stehule
Discussion: https://postgr.es/m/alpine.DEB.2.20.1704020917220.4632@lancre
Commit 9915de6c1cb2 changed the default behavior of
DROP_REPLICATION_SLOT so that it would wait until any session holding
the slot active would release it, instead of raising an error. But
users are already depending on the original behavior, so revert to it by
default and add a WAIT option to invoke the new behavior.
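That is, on a replication connection (the slot name is illustrative):

    DROP_REPLICATION_SLOT myslot        -- errors out if the slot is active
    DROP_REPLICATION_SLOT myslot WAIT   -- blocks until the slot is released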
Per complaint from Simone Gotti, in
Discussion: https://postgr.es/m/CAEvsy6Wgdf90O6pUvg2wSVXL2omH5OPC-38OD4Zzgk-FXavj3Q@mail.gmail.com
Previously, the parallel executor logic did reinitialization of shared
state within the ExecReScan code for parallel-aware scan nodes. This is
problematic, because it means that the ExecReScan call has to occur
synchronously (ie, during the parent Gather node's ReScan call). That is
swimming very much against the tide so far as the ExecReScan machinery is
concerned; the fact that it works at all today depends on a lot of fragile
assumptions, such as that no plan node between Gather and a parallel-aware
scan node is parameterized. Another objection is that because ExecReScan
might be called in workers as well as the leader, hacky extra tests are
needed in some places to prevent unwanted shared-state resets.
Hence, let's separate this code into two functions, a ReInitializeDSM
call and the ReScan call proper. ReInitializeDSM is called only in
the leader and is guaranteed to run before we start new workers.
ReScan is returned to its traditional function of resetting only local
state, which means that ExecReScan's usual habits of delaying or
eliminating child rescan calls are safe again.
As with the preceding commit 7df2c1f8d, it doesn't seem to be necessary
to make these changes in 9.6, which is a good thing because the FDW and
CustomScan APIs are impacted.
Discussion: https://postgr.es/m/CAA4eK1JkByysFJNh9M349u_nNjqETuEnY_y1VUc_kJiU0bxtaQ@mail.gmail.com
The formatting of the sidebar element didn't carry over to the new tool
chain. Instead of inventing a whole new way of dealing with it, just
convert the one use to a "note".
As long as PQntuples, PQgetvalue, etc., use "int" for row numbers, we're
pretty much stuck with this limitation. The documentation formerly stated
that the result of PQntuples "might overflow on 32-bit operating systems",
which is just nonsense: that's not where the overflow would happen, and
if you did reach an overflow it would not be on a 32-bit machine, because
you'd have OOM'd long since.
Discussion: https://postgr.es/m/CA+FnnTxyLWyjY1goewmJNxC==HQCCF4fKkoCTa9qR36oRAHDPw@mail.gmail.com
Our documentation hasn't really caught up with the fact that
non-exclusive backups can now be taken using pg_start_backup and
pg_stop_backup even on standbys. Update, also correcting some
errors introduced by 52f8a59dd953c6820baf153e97cf07d31b8ac1d6.
Updates to the 9.6 documentation are needed as well, but that
will need a separate patch as some things are different on that
version.
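For the archives, a non-exclusive backup (the only kind possible on a
standby) runs roughly like this; the session must stay connected between
the two calls, and the backup label is illustrative:

    SELECT pg_start_backup('nightly', false, false);
    -- ... copy the data directory with an external tool ...
    SELECT * FROM pg_stop_backup(false);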
David Steele, reviewed by Robert Haas and Michael Paquier
Discussion: http://postgr.es/m/d4d951b9-89c0-6bc1-b6ff-d0b2dd5a8966@pgmasters.net
Users can still create them themselves.  Instead, document the Unicode
TR 35 collation options for ICU, so that users know how to create any
variants they need.
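For example, to get the German phone-book sort order via the TR 35 "co"
keyword (the collation name is invented):

    CREATE COLLATION german_phonebook
        (provider = icu, locale = 'de-u-co-phonebk');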
Reviewed-by: Peter Geoghegan <pg@bowt.ie>
Install language+region combinations even if they are not distinct from
the language's base locale. This gives better long-term stability of
the set of predefined locales and makes the predefined locales less
implementation-dependent and more practical for users.
Reviewed-by: Peter Geoghegan <pg@bowt.ie>
This was made possible by commit
6c2003f8a1bbc7c192a2e83ec51581c018aa162f.  This just makes pg_dump aware
of it and updates the documentation.
Author: Petr Jelinek <petr.jelinek@2ndquadrant.com>
Also create PDF bookmarks/ToC entries for subsections of reference
pages. This was a regression from the previous jadetex-based build.
Reported-by: Erik Rijkers <er@xs4all.nl>
Since PostgreSQL 9.6, rolreplication no longer determines whether a role
can run pg_start_backup() and pg_stop_backup(), so remove that claim.
Document instead that this attribute determines whether a role can create
and drop replication slots.
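That is, roughly (role and slot names are invented):

    CREATE ROLE streamer REPLICATION LOGIN;
    -- connected as streamer:
    SELECT pg_create_physical_replication_slot('standby_slot');
    SELECT pg_drop_replication_slot('standby_slot');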
Reported-by: Fujii Masao <masao.fujii@gmail.com>
Many places that mentioned only Gather should also mention Gather
Merge, or should be phrased in a more neutral way. Be more clear
about the fact that max_parallel_workers_per_gather affects the number
of workers the planner may want to use. Fix a typo. Explain how
Gather Merge works. Adjust wording around parallel scans to be a bit
more clear. Adjust wording around parallel-restricted operations for
the fact that uncorrelated subplans are no longer restricted.
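As a quick illustration of the setting's effect (the table name is
hypothetical):

    SET max_parallel_workers_per_gather = 4;
    EXPLAIN SELECT count(*) FROM big_table;
    -- the plan may now show up to "Workers Planned: 4" under a Gather
    -- or Gather Merge node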
Patch by me, reviewed by Erik Rijkers
Discussion: http://postgr.es/m/CA+TgmoZsTjgVGn=ei5ht-1qGFKy_m1VgB3d8+Rg304hz91N5ww@mail.gmail.com
Similar to what was fixed in commit 9915de6c1cb2 for replication slots,
but this time it's related to replication origins: DROP SUBSCRIPTION
attempts to drop the replication origin, but that fails if the
replication worker process hasn't yet marked it unused. This causes
failures in the buildfarm:
ERROR: could not drop replication origin with OID 1, in use by PID 34069
Like the aforementioned commit, fix by having the process running DROP
SUBSCRIPTION sleep until the worker marks the replication origin
struct as free. This uses a condition variable on each replication
origin shmem state struct, so that the session trying to drop can sleep
and expect to be awakened by the process keeping the origin open.
Also fix an SGML markup mistake in the previous commit.
Discussion: https://postgr.es/m/20170808001433.rozlseaf4m2wkw3n@alvherre.pgsql
In commit 9915de6c1cb2, we introduced a new wait point for replication
slots and incorrectly labelled it as wait event PG_WAIT_LOCK. That's
wrong, so invent an appropriate new wait event instead, and document it
properly.
While at it, fix numerous other problems in the vicinity:
- Two different walreceiver wait events were being mixed up in a single
  wait event (which wasn't documented either); split it out so that they
  can be distinguished, and document the new events properly.
- ParallelBitmapPopulate was documented but didn't exist.
- ParallelBitmapScan was not documented.  (I think this should be called
  "ParallelBitmapScanInit" instead.)
- Logical replication wait events weren't documented.
- Various symbols had been added in dartboard order in various places.
  Put them in alphabetical order instead, as was originally intended.
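All of these events surface in the monitoring views, e.g.:

    SELECT pid, wait_event_type, wait_event
    FROM pg_stat_activity
    WHERE wait_event IS NOT NULL;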
Discussion: https://postgr.es/m/20170808181131.mu4fjepuh5m75cyq@alvherre.pgsql