This patch creates a new extension property, "trusted". An extension
that's marked that way in its control file can be installed by a
non-superuser who has the CREATE privilege on the current database,
even if the extension contains objects that normally would have to be
created by a superuser. The objects within the extension will (by
default) be owned by the bootstrap superuser, but the extension itself
will be owned by the calling user. This allows replicating the old
behavior around trusted procedural languages, without all the
special-case logic in CREATE LANGUAGE. We have, however, chosen to
loosen the rules slightly: formerly, only a database owner could take
advantage of the special case that allowed installation of a trusted
language, but now anyone who has CREATE privilege can do so.
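As a hedged illustration (the role and database names are hypothetical, and
plperl is one of the core PLs whose extension is expected to be marked
trusted after this change):
GRANT CREATE ON DATABASE appdb TO alice;  -- run by a superuser or the database owner
-- then, connected to appdb as alice, with no superuser involvement:
CREATE EXTENSION plperl;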
Having done that, we can delete the pg_pltemplate catalog, moving the
knowledge it contained into the extension script files for the various
PLs. This ends up being no change at all for the in-core PLs, but it is
a large step forward for external PLs: they can now have the same ease
of installation as core PLs do. The old "trusted PL" behavior was only
available to PLs that had entries in pg_pltemplate, but now any
extension can be marked trusted if appropriate.
This also removes one of the stumbling blocks for our Python 2 -> 3
migration, since the association of "plpythonu" with Python 2 is no
longer hard-wired into pg_pltemplate's initial contents. Exactly where
we go from here on that front remains to be settled, but one problem
is fixed.
Patch by me, reviewed by Peter Eisentraut, Stephen Frost, and others.
Discussion: https://postgr.es/m/5889.1566415762@sss.pgh.pa.us
Commit 40d964ec99 allowed the VACUUM command to leverage multiple CPUs by
invoking parallel workers to process indexes. This commit provides a
'--parallel' option to specify the parallel degree used by the VACUUM command.
Author: Masahiko Sawada, with a few modifications by me
Reviewed-by: Mahendra Singh and Amit Kapila
Discussion: https://postgr.es/m/CAD21AoDTPMgzSkV4E3SFo1CH_x50bf5PqZFQf4jmqjk-C03BWg@mail.gmail.com
Only the parameter parallel_workers can be used directly with ALTER
TABLE.
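For illustration, a hedged example of setting that parameter (the table name
is made up):
ALTER TABLE measurements SET (parallel_workers = 4);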
Issue introduced in 6f3a13f, so backpatch down to 10.
Author: Justin Pryzby
Discussion: https://postgr.es/m/20200106025623.GA12066@telsasoft.com
Backpatch-through: 10
Attempting to use CREATE INDEX, DROP INDEX or REINDEX with CONCURRENTLY
on a temporary relation with ON COMMIT actions triggered unexpected
errors because those operations use multiple transactions internally to
complete their work. Here is for example one confusing error when using
ON COMMIT DELETE ROWS:
ERROR: index "foo" already contains data
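A hedged sketch of the kind of sequence involved (object names are
illustrative):
CREATE TEMP TABLE foo_tab (a int) ON COMMIT DELETE ROWS;
CREATE INDEX CONCURRENTLY foo ON foo_tab (a);
-- the extra transactions used by CONCURRENTLY collide with the ON COMMIT action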
Issues related to temporary relations and concurrent indexing are fixed
in this commit by enforcing the non-concurrent path to be taken for
temporary relations even if using CONCURRENTLY, transparently to the
user. Using a non-concurrent path does not matter in practice, as locks
cannot be taken on a temporary relation by a session other than the
one owning the relation, and the non-concurrent operation is more
efficient anyway.
The problem exists with REINDEX since v12 with the introduction of
CONCURRENTLY, and with CREATE/DROP INDEX since CONCURRENTLY exists for
those commands. In all supported versions, this caused only confusing
error messages to be generated. Note that with REINDEX, it was also
possible to issue a REINDEX CONCURRENTLY for a temporary relation owned
by a different session, leading to a server crash.
The idea to enforce transparently the non-concurrent code path for
temporary relations comes originally from Andres Freund.
Reported-by: Manuel Rigger
Author: Michael Paquier, Heikki Linnakangas
Reviewed-by: Andres Freund, Álvaro Herrera, Heikki Linnakangas
Discussion: https://postgr.es/m/CA+u7OA6gP7YAeCguyseusYcc=uR8+ypjCcgDDCTzjQ+k6S9ksQ@mail.gmail.com
Backpatch-through: 9.4
The behavior of something like
ALTER TABLE transactions
ADD COLUMN status varchar(30) DEFAULT 'old',
ALTER COLUMN status SET default 'current';
is to fill existing table rows with 'old', not 'current'. That's
intentional and desirable for a couple of reasons:
* It makes the behavior the same whether you merge the sub-commands
into one ALTER command or give them separately;
* If we applied the new default while filling the table, there would
be no way to get the existing behavior in one SQL command.
The same reasoning applies in cases that add a column and then
manipulate its GENERATED/IDENTITY status in a second sub-command,
since the generation expression is really just a kind of default.
However, that wasn't very obvious (at least not to me; earlier in
the referenced discussion thread I'd thought it was a bug to be
fixed). And it certainly wasn't documented.
Hence, add documentation, code comments, and a test case to clarify
that this behavior is all intentional.
In passing, adjust ATExecAddColumn's defaults-related relkind check
so that it matches up exactly with ATRewriteTables, instead of being
effectively (though not literally) the negated inverse condition.
The reasoning can be explained a lot more concisely that way, too
(not to mention that the comment now matches the code, which it
did not before).
Discussion: https://postgr.es/m/10365.1558909428@sss.pgh.pa.us
This feature allows VACUUM to leverage multiple CPUs in order to
process indexes. This enables us to perform index vacuuming and index
cleanup with background workers. This adds a PARALLEL option to the VACUUM
command, with which the user can specify the number of workers to be used
to perform the command; that number is limited by the number of indexes on
the table. Specifying zero workers disables parallelism.
This option can't be used with the FULL option.
Each index is processed by at most one vacuum process. Therefore, parallel
vacuum can be used only when the table has at least two indexes.
The parallel degree is either specified by the user or determined based on
the number of indexes that the table has, and further limited by
max_parallel_maintenance_workers. An index can participate in a parallel
vacuum only if its size is greater than min_parallel_index_scan_size.
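A hedged usage sketch (table name is illustrative):
VACUUM (PARALLEL 4) pgbench_accounts;  -- use up to 4 workers for index vacuum/cleanup
VACUUM (PARALLEL 0) pgbench_accounts;  -- explicitly disable parallelism
-- VACUUM (PARALLEL 2, FULL) is rejected, as PARALLEL cannot be combined with FULL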
Author: Masahiko Sawada and Amit Kapila
Reviewed-by: Dilip Kumar, Amit Kapila, Robert Haas, Tomas Vondra,
Mahendra Singh and Sergei Kornilov
Tested-by: Mahendra Singh and Prabhat Sahu
Discussion: https://postgr.es/m/CAD21AoDTPMgzSkV4E3SFo1CH_x50bf5PqZFQf4jmqjk-C03BWg@mail.gmail.com
Discussion: https://postgr.es/m/CAA4eK1J-VoR9gzS5E75pcD-OH0mEyCdp8RihcwKrcuw7J-Q0+w@mail.gmail.com
Column defaults may be specified separately for each partition.
But an INSERT via the partitioned table ignores those partition-level
default values. The former is documented, but the latter restriction was not.
This commit adds a note about that restriction to the documentation.
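A hedged illustration of the restriction (names and values are made up):
CREATE TABLE parent (a int, b int DEFAULT 0) PARTITION BY LIST (a);
CREATE TABLE part1 PARTITION OF parent (b DEFAULT 42) FOR VALUES IN (1);
INSERT INTO parent (a) VALUES (1);  -- b becomes 0, the parent's default
INSERT INTO part1 (a) VALUES (1);   -- b becomes 42, the partition's default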
Author: Fujii Masao
Reviewed-by: Amit Langote
Discussion: https://postgr.es/m/CAHGQGwEs-59omrfGF7hOHz9iMME3RbKy5ny+iftDx3LHTEn9sA@mail.gmail.com
Up to now, whatever you'd edited was put back into the query buffer
but not redisplayed, which is less than user-friendly. But we can
improve that just by printing the text along with a prompt, if we
enforce that the editing result ends with a newline (which it
typically would anyway). You then continue typing more lines if
you want, or you can type ";" or do \g or \r or another \e.
This is intentionally divorced from readline's processing,
for simplicity and so that it works the same with or without
readline enabled. We discussed possibly integrating things
more closely with readline; but that seems difficult, of uncertain
portability across different readline and libedit versions, and
of limited real benefit anyway. Let's try the simple way and
see if it's good enough.
Patch by me, thanks to Fabien Coelho and Laurenz Albe for review
Discussion: https://postgr.es/m/13192.1572318028@sss.pgh.pa.us
Specifying '-f' will add the 'force' option to the DROP DATABASE command
sent to the server. This will try to terminate all existing connections
to the target database before dropping it.
Author: Pavel Stehule
Reviewed-by: Vignesh C and Amit Kapila
Discussion: https://postgr.es/m/CAP_rwwmLJJbn70vLOZFpxGw3XD7nLB_7+NKz46H5EOO2k5H7OQ@mail.gmail.com
The existing text stated that "Default privileges that are specified
per-schema are added to whatever the global default privileges are for
the particular object type". However, that bare-bones observation is
not quite clear enough, as demonstrated by the complaint in bug #16124.
Flesh it out by stating explicitly that you can't revoke built-in
default privileges this way, and by providing an example to drive
the point home.
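A hedged illustration (schema and role names are hypothetical):
ALTER DEFAULT PRIVILEGES IN SCHEMA myschema
    REVOKE ALL ON TABLES FROM some_role;
-- this only undoes grants previously added via ALTER DEFAULT PRIVILEGES;
-- it cannot take away the built-in default privileges, such as the owner's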
Back-patch to all supported branches, since it's been like this
from the beginning.
Discussion: https://postgr.es/m/16124-423d8ee4358421bc@postgresql.org
This new option terminates the other sessions connected to the target
database and then drops it. To terminate other sessions, the current user
must have the required permissions (the same as for pg_terminate_backend()).
We don't allow terminating the sessions if prepared transactions, active
logical replication slots, or subscriptions are present in the target database.
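A hedged usage example (database name is made up):
DROP DATABASE busy_db WITH (FORCE);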
Author: Pavel Stehule with changes by me
Reviewed-by: Dilip Kumar, Vignesh C, Ibrar Ahmed, Anthony Nowocien,
Ryan Lambert and Amit Kapila
Discussion: https://postgr.es/m/CAP_rwwmLJJbn70vLOZFpxGw3XD7nLB_7+NKz46H5EOO2k5H7OQ@mail.gmail.com
This commit allows pgbench's --init-steps option to accept the character "G",
meaning server-side data generation, as an initialization step.
With "G", only a small number of queries are sent from the pgbench client,
and the data is actually generated in the server. This can make
the initialization phase faster if the bandwidth between the pgbench client
and the server is low.
Author: Fabien Coelho
Reviewed-by: Anna Endo, Ibrar Ahmed, Fujii Masao
Discussion: https://postgr.es/m/alpine.DEB.2.21.1904061826420.3678@lancre
Since 898e5e32, this command partially uses ShareUpdateExclusiveLock,
but the docs were not updated to reflect that.
Author: Justin Pryzby
Reviewed-by: Amit Langote, Álvaro Herrera, Michael Paquier
Discussion: https://postgr.es/m/20191028001207.GB23808@telsasoft.com
Backpatch-through: 12
This clarifies how to use constraints, and how to take advantage of them,
when attaching a new partition.
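A hedged sketch of the technique the docs describe (names and bounds are
illustrative): adding a CHECK constraint that implies the partition bound
beforehand lets ATTACH PARTITION skip the full-table validation scan.
ALTER TABLE meas_y2020m01 ADD CONSTRAINT tmp_check
    CHECK (logdate >= '2020-01-01' AND logdate < '2020-02-01');
ALTER TABLE measurement ATTACH PARTITION meas_y2020m01
    FOR VALUES FROM ('2020-01-01') TO ('2020-02-01');
ALTER TABLE meas_y2020m01 DROP CONSTRAINT tmp_check;  -- redundant once attached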
Author: Justin Pryzby
Reviewed-by: Amit Langote, Álvaro Herrera, Michael Paquier
Discussion: https://postgr.es/m/20191028001207.GB23808@telsasoft.com
Backpatch-through: 10
Using glibc's version string to detect potential collation definition
changes is not 100% reliable, but it's better than nothing. Currently
this affects only collations explicitly provided by "libc". More work
will be needed to handle the default collation.
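A hedged way to inspect what gets recorded (collprovider 'c' means libc):
SELECT collname, collversion FROM pg_collation WHERE collprovider = 'c';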
Author: Thomas Munro, based on a suggestion from Christoph Berg
Reviewed-by: Peter Eisentraut
Discussion: https://postgr.es/m/4b76c6d4-ae5e-0dc6-7d0d-b5c796a07e34%402ndquadrant.com
This fixes multiple areas of the documentation:
- COPY, for its past-compatibility section.
- SET ROLE, which mentioned INHERITS instead of INHERIT.
- PREPARE, referring to stmt_name, which is not present.
- Extension documentation about format name with upgrade scripts.
Backpatch down to 9.4 for the relevant parts.
Author: Alexander Lakhin
Discussion: https://postgr.es/m/bf95233a-9943-b341-e2ff-a860c28af481@gmail.com
Backpatch-through: 9.4
This includes a couple of changes around the new behavior of pg_rewind
which enforces recovery to happen once on a cluster not shut down
cleanly:
- Some comments and documentation improvements.
- Shut down the cluster to be rewound with immediate mode in all the tests;
this allows checking the forced-recovery behavior that is wanted as
the new default.
- Use -F for the forced-recovery step, so that postgres does not use
fsync. That sync was useless, as a final sync is done once the tool is done.
Author: Michael Paquier
Reviewed-by: Alexey Kondratov
Discussion: https://postgr.es/m/20191004083721.GA1829@paquier.xyz
These new options allow users to partition the pgbench_accounts table by
specifying the number of partitions and the partitioning method. The values
allowed for the partitioning method are range and hash.
This feature allows users to measure the overhead of partitioning, if any.
Author: Fabien COELHO
Reviewed-by: Amit Kapila, Amit Langote, Dilip Kumar, Asif Rehman, and
Alvaro Herrera
Discussion: https://postgr.es/m/alpine.DEB.2.21.1907230826190.7008@lancre
The behavior described in the PREPARE man page applies only for the
default plan_cache_mode setting, so explain that properly. Rewrite
some of the text while I'm here. Per suggestion from Bruce.
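For reference, the non-default settings that change that behavior can be
selected like this (a hedged example):
SET plan_cache_mode = force_generic_plan;  -- or force_custom_plan; the default is auto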
Discussion: https://postgr.es/m/20190930155505.GA21095@momjian.us
This commit adds a mention that the order of the columns specified when
creating multi-column most-common-value statistics is insignificant, and
tries to simplify the examples.
Discussion: https://postgr.es/m/20190828162238.GA8360@momjian.us
Backpatch-through: 12
If we don't do this, the rewind fails if the server wasn't cleanly shut
down, which seems unhelpful and serves no purpose.
Also provide a new option --no-ensure-shutdown to suppress this
behavior, for alleged advanced usage that prefers to avoid the crash
recovery.
Authors: Paul Guo, Jimmy Yih, Ashwin Agrawal
Reviewed-by: Álvaro Herrera
Discussion: https://postgr.es/m/CAEET0ZEffUkXc48pg2iqARQgGRYDiiVxDu+yYek_bTwJF+q=Uw@mail.gmail.com
When building statistics, we need to decide how many rows to sample and
how accurate the resulting statistics should be. Until now, it was not
possible to explicitly define the statistics target for extended statistics
objects; the value was always computed from the per-attribute targets,
with a fallback to the system-wide default statistics target.
That's a bit inconvenient, as it ties together the statistics targets set
for per-column and extended statistics. In some cases it may be useful
to require a larger sample / higher accuracy for extended statistics (or the
other way around), but with this approach that's not possible.
So this commit introduces a new command, allowing the statistics target to
be specified for individual extended statistics objects, overriding the value
derived from the per-attribute targets (and the system default).
ALTER STATISTICS stat_name SET STATISTICS target_value;
When determining the statistics target for an extended statistics object, we
first look at this explicitly set value. When this value is -1, we fall
back to the old formula, looking at the per-attribute targets first and
then the system default. This means the behavior is backwards compatible
with older PostgreSQL releases.
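A hedged end-to-end example (object and table names are made up):
CREATE STATISTICS sts (dependencies, mcv) ON a, b FROM tbl;
ALTER STATISTICS sts SET STATISTICS 1000;  -- explicit target for this object
ALTER STATISTICS sts SET STATISTICS -1;    -- revert to the derived value
ANALYZE tbl;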
Author: Tomas Vondra
Discussion: https://postgr.es/m/20190618213357.vli3i23vpkset2xd@development
Reviewed-by: Kirk Jamison, Dean Rasheed
When using COMMIT AND CHAIN or ROLLBACK AND CHAIN not in an explicit
transaction block, the previous implementation would leave a
transaction block active in the ROLLBACK case but not the COMMIT case.
To fix for now, error out when these commands are used outside an explicit
transaction block. This restriction could be lifted if a sensible
definition and implementation is found.
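A hedged illustration of the resulting behavior:
BEGIN;
COMMIT AND CHAIN;  -- fine: commits and immediately starts a new transaction block
COMMIT;            -- ends the chained transaction block
COMMIT AND CHAIN;  -- now raises an error: not inside an explicit transaction block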
Bug: #15977
Author: fn ln <emuser20140816@gmail.com>
Reviewed-by: Fabien COELHO <coelho@cri.ensmp.fr>
Clarify in the help output and documentation that -n, -t etc. take a
"pattern" rather than a "schema" or "table" etc. This was especially
confusing now that the new pg_dumpall --exclude-database option was
documented with "pattern" and the others not, even though they all
behave the same.
Discussion: https://www.postgresql.org/message-id/flat/b85f3fa1-b350-38d1-1893-4f7911bd7310%402ndquadrant.com
Previously, async.c got rid of duplicate notifications by scanning
the list of pending events to compare each one to the proposed new
event. This works okay for very small numbers of distinct events,
but degrades as O(N^2) for many events. We can improve matters by
using a hash table to probe for duplicates. So as not to add a
lot of overhead for the simple cases that the code did handle well
before, create the hash table only once a (sub)transaction has
queued more than 16 distinct notify events.
A downside is that we now have to do per-event work to propagate
a successful subtransaction's notify events up to its parent.
(But this isn't significant unless the subtransaction had many
events, in which case the O(N^2) behavior would have been in
play already, so we still come out ahead.)
We can make some lemonade out of this lemon, though: since we must
examine each event anyway, it's now possible to de-duplicate events
fully, rather than skipping that for events merged up from
subtransactions. Hence, remove the old weasel wording in notify.sgml
about whether de-duplication happens or not, and adjust the test
case in async-notify.spec that exhibited the old behavior.
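A hedged illustration of the now-complete de-duplication (channel and
payload are made up):
BEGIN;
NOTIFY my_channel, 'x';
SAVEPOINT s;
NOTIFY my_channel, 'x';  -- duplicate issued inside a subtransaction
RELEASE SAVEPOINT s;
COMMIT;
-- listening sessions now receive a single notification for this channel/payload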
While at it, rearrange the definition of struct Notification to make
it more compact and require just one palloc per event, rather than
two or three. This saves space when there are a lot of events,
in fact more than enough to buy back the space needed for the hash
table.
Patch by me, based on discussions around a different patch
submitted by Filip Rembiałkowski.
Discussion: https://postgr.es/m/17822.1564186806@sss.pgh.pa.us
VACUUM's reference page had this text, but ANALYZE's didn't. That's
a clear oversight given that section 5.7 explicitly delegates the
responsibility to define permissions requirements to the individual
commands' man pages.
Per gripe from Isaac Morland. Back-patch to all supported branches.
Discussion: https://postgr.es/m/CAMsGm5fp3oBUs-2iRfii0iEO=fZuJALVyM2zJLhNTjG34gpAVQ@mail.gmail.com
When doing a schema-level or a database-level operation, a list of
relations to build is created which gets processed in parallel using
multiple connections, based on the recent refactoring for parallel slots
in src/bin/scripts/. System catalogs are processed first in a
serialized fashion to prevent deadlocks, followed by the rest done in
parallel.
This new option is not compatible with --system, as reindexing system
catalogs in parallel can lead to deadlocks, nor with --index, as there is
no conflict handling for indexes rebuilt in parallel that depend on the
same relation.
Author: Julien Rouhaud
Reviewed-by: Sergei Kornilov, Michael Paquier
Discussion: https://postgr.es/m/CAOBaU_YrnH_Jqo46NhaJ7uRBiWWEcS40VNRQxgFbqYo9kApUsg@mail.gmail.com