This adds to database objects the same version tracking that collation
objects have. There is a new pg_database column datcollversion that
stores the version, a new function
pg_database_collation_actual_version() to get the version from the
operating system, and a new subcommand ALTER DATABASE ... REFRESH
COLLATION VERSION.
This was not originally added together with pg_collation.collversion,
since originally version tracking was only supported for ICU, and ICU
at the database level is not currently supported. But we now have
version tracking for glibc (since PG13), FreeBSD (since PG14), and
Windows (since PG13), so this is useful to have now.
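For example, a hypothetical database "mydb" could be checked and then
refreshed like this:
SELECT datcollversion, pg_database_collation_actual_version(oid)
FROM pg_database WHERE datname = 'mydb';
ALTER DATABASE mydb REFRESH COLLATION VERSION;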
Reviewed-by: Julien Rouhaud <rjuju123@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/f0ff3190-29a3-5b39-a179-fa32eee57db6%40enterprisedb.com
Currently, during UPDATE, the unchanged replica identity key attributes
are not logged separately because they are already logged as part of the
new tuple. But if they are stored externally, then the untoasted values
are not logged as part of the new tuple, and logical replication won't
be able to replicate such UPDATEs. So we need to log such attributes as
part of the old_key_tuple during UPDATE.
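For illustration, a sketch of the kind of case this fixes (table and
index names are hypothetical):
CREATE TABLE t (key text NOT NULL, data int);
ALTER TABLE t ALTER COLUMN key SET STORAGE EXTERNAL;
CREATE UNIQUE INDEX t_key_idx ON t (key);
ALTER TABLE t REPLICA IDENTITY USING INDEX t_key_idx;
INSERT INTO t VALUES (repeat('x', 8000), 1);
UPDATE t SET data = 2;
-- key is unchanged and stored out of line, so its value was previously
-- missing from the WAL record for this UPDATE.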
Reported-by: Haiying Tang
Author: Dilip Kumar and Amit Kapila
Reviewed-by: Alvaro Herrera, Haiying Tang, Andres Freund
Backpatch-through: 10
Discussion: https://postgr.es/m/OS0PR01MB611342D0A92D4F4BF26C0F47FB229@OS0PR01MB6113.jpnprd01.prod.outlook.com
LZ4 compression can now be performed on the client using
pg_basebackup -Ft --compress client-lz4, and LZ4 decompression of
a backup compressed on the server can be performed on the client
using pg_basebackup -Fp --compress server-lz4.
Dipesh Pandit, reviewed and tested by Jeevan Ladhe and Tushar Ahuja,
with a few corrections - and some documentation - by me.
Discussion: http://postgr.es/m/CAN1g5_FeDmiA9D8wdG2W6Lkq5CpubxOAqTmd2et9hsinTJtsMQ@mail.gmail.com
LZ4 compression can be a lot faster than gzip compression, so users
may prefer it even if the compression ratio is not as good. We will
want pg_basebackup to support LZ4 compression and decompression on the
client side as well, and there is a pending patch for that, but it's
by a different author, so I am committing this part separately for
that reason.
Jeevan Ladhe, reviewed by Tushar Ahuja and by me.
Discussion: http://postgr.es/m/CANm22Cg9cArXEaYgHVZhCnzPLfqXCZLAzjwTq7Fc0quXRPfbxA@mail.gmail.com
Historically, the location of any files generated by pg_upgrade, such as
the per-database logs and internal dumps, has been the current working
directory, leaving all those files behind when using --retain or on a
failure.
Putting all those contents in a targeted subdirectory makes the whole
thing easier to debug, and simplifies the code in charge of cleaning up
the logs. Another reason is that this facilitates the move of pg_upgrade
to TAP, with a fixed location for all the logs to grab if the test fails
repeatedly.
Initially, we thought about being able to specify the output directory
with a new option, but we have settled on using a subdirectory located
at the root of the new cluster's data folder, "pg_upgrade_output.d",
instead, as in the end the new data directory is the location of all the
data generated by pg_upgrade. There is a catch with group permissions
here, though: if the new data folder has been initialized with this
option, we need to create all the files and paths with the correct
permissions, or a base backup taken after a pg_upgrade --retain would
fail. This means that GetDataDirectoryCreatePerm() has to be called
before creating the log paths, before a couple of sanity checks on the
clusters, and before getting the socket directory for the cluster's host
settings.
The idea of the new location is based on a suggestion from Peter
Eisentraut.
Also thanks to Andrew Dunstan, Peter Eisentraut, Daniel Gustafsson, Tom
Lane and Bruce Momjian for the discussion (in alphabetical order).
Author: Justin Pryzby
Discussion: https://postgr.es/m/20211212025017.GN17618@telsasoft.com
A very well-informed user might deduce this from what we said already,
but I'd bet against it. Lay it out explicitly.
While here, rewrite the comment about tuple routing to be more
intelligible to an average SQL user.
Per bug #17395 from Alexander Lakhin. Back-patch to v11. (The text
in this area is different in v10 and I'm not sufficiently excited
about this point to adapt the patch.)
Discussion: https://postgr.es/m/17395-8c326292078d1a57@postgresql.org
Running a shell command for each file to be archived has a lot of
overhead and may not offer as much error checking as you want, or the
exact semantics that you want. So, offer the option to call a loadable
module for each file to be archived, rather than running a shell command.
Also, add a 'basic_archive' contrib module as an example implementation
that archives to a local directory.
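As a minimal sketch (the directory path is only an example), the module
can be configured in postgresql.conf like this:
archive_mode = on
archive_library = 'basic_archive'
basic_archive.archive_directory = '/mnt/server/archivedir'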
Nathan Bossart, with a little bit of kibitzing by me.
Discussion: http://postgr.es/m/20220202224433.GA1036711@nathanxps13
The SQL standard has been ambiguous about whether null values in
unique constraints should be considered equal or not. Different
implementations have different behaviors. In the SQL:202x draft, this
has been formalized by making this implementation-defined and adding
an option on unique constraint definitions UNIQUE [ NULLS [NOT]
DISTINCT ] to choose a behavior explicitly.
This patch adds this option to PostgreSQL. The default behavior
remains UNIQUE NULLS DISTINCT. Making this happen in the btree code
is pretty easy; most of the patch is just to carry the flag around to
all the places that need it.
The CREATE UNIQUE INDEX syntax extension is not from the standard,
it's my own invention.
I named all the internal flags, catalog columns, etc. in the negative
("nulls not distinct") so that the default PostgreSQL behavior is the
default if the flag is false.
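For example (table and column names are hypothetical):
CREATE TABLE t (x int UNIQUE NULLS NOT DISTINCT);
INSERT INTO t VALUES (NULL);  -- ok
INSERT INTO t VALUES (NULL);  -- rejected, nulls are treated as equal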
Reviewed-by: Maxim Orlov <orlovmg@gmail.com>
Reviewed-by: Pavel Borisov <pashkin.elfe@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/84e5ee1b-387e-9a54-c326-9082674bde78@enterprisedb.com
This patch improves tab completion's ability to deal with
valid variant spellings of SQL identifiers. Notably:
* Unquoted upper-case identifiers are now downcased as the backend
would do, allowing them to be completed correctly.
* Tab completion can now match identifiers that are quoted even
though they don't need to be; for example "f<TAB> now completes
to "foo" if that's the only available name. Previously, only
names that require quotes would be offered.
* Schema-qualified identifiers are now supported where SQL syntax
allows it; many lesser-used completion rules neglected this.
* Completion operations that refer back to some previously-typed
name (for example, to complete names of columns belonging to a
previously-mentioned table) now allow variant spellings of the
previous name too.
In addition, performance of tab completion queries has been
improved for databases containing many objects, although
you'd only be likely to notice with a heavily-loaded server.
Authors of future tab-completion patches should note that this
commit changes many details about how tab completion queries
must be written:
* Tab completion queries now deal in raw object names; do not
use quote_ident().
* The name-matching restriction in a query must now be written
as "outputcol LIKE '%s'", not "substring(outputcol,1,%d)='%s'".
* The SchemaQuery mechanism has been extended so that it can
handle queries that refer back to a previous name. Most completion
queries that do that should be converted to SchemaQuery form.
Only consider using a literal query if the previous name can
never be schema-qualified. Don't use a literal query if the
name-to-be-completed can validly be schema-qualified, either.
* Use set_completion_reference() to specify which word is the previous
name to consider, for either a SchemaQuery or a literal query.
* If you want to offer some keywords in addition to a query result
(for example, offer COLUMN in addition to column names after
"ALTER TABLE t RENAME"), do not use the old hack of tacking the
keywords on with UNION. Instead use the new QUERY_PLUS macros
to write such keywords separately from the query proper. The
"addon" macro arguments that used to be used for this purpose
are gone.
* If your query returns something that's not a SQL identifier
(such as an attribute number or enum label), use the new
QUERY_VERBATIM macros to prevent the result from incorrectly
getting double-quoted. You may still need to use quote_literal
in such a query, too.
Tom Lane and Haiying Tang
Discussion: https://postgr.es/m/a63cbd45e3884cf9b3961c2a6a95dcb7@G08CNEXMBPEKD05.g08.fujitsu.local
I had made it depend on superuser, but that seems clearly inferior.
Also document the permissions requirement in the streaming replication
protocol section of the documentation, rather than only in the
section having to do with pg_basebackup.
Idea and patch from Dagfinn Ilmari Mannsåker.
Discussion: http://postgr.es/m/87bkzw160u.fsf@wibble.ilmari.org
If you have a low-bandwidth connection between the client and the
server, it's reasonable to want to compress on the server side but
then decompress and extract the backup on the client side. This
commit allows you to do just that.
Dipesh Pandit, with minor and mostly cosmetic changes by me.
Discussion: http://postgr.es/m/CAN1g5_HiSh8ajUMd4ePtGyCXo89iKZTzaNyzP_qv1eJbi4YHXA@mail.gmail.com
The original text on refreshing collation versions was written
specifically for ICU, and then information for other providers was
incrementally tacked on at the end. Reword this to be a bit more
general and less reflective of how it was added.
Commit 0ad8032910 failed to update
the pg_basebackup documentation to mention that "client-" or
"server-" can now be prepended to the compression method name. Fix
it there, and also in the --help output that you get from running
the binary.
Also in the documentation, there's an old issue that the arguments to
--checkpoint shouldn't be marked as parameters, because "fast" and
"spread" are literal strings. Fix that too.
Dagfinn Ilmari Mannsåker, partly as per a report from
Shinoda Noriyoshi.
Discussion: http://postgr.es/m/PH7PR84MB1885C1CF433057807551172BEE5F9@PH7PR84MB1885.NAMPRD84.PROD.OUTLOOK.COM
This fixes a set of issues that have accumulated over the past months
(or years) in various code areas. Most fixes are related to recent
additions from the development of v15.
Author: Justin Pryzby
Discussion: https://postgr.es/m/20220124030001.GQ23027@telsasoft.com
pg_basebackup's --compression option now lets you write either
"client-gzip" or "server-gzip" instead of just "gzip" to specify
where the compression should be performed. If you write simply
"gzip" it's taken to mean "client-gzip" unless you also use
--target, in which case it is interpreted to mean "server-gzip",
because that's the only thing that makes any sense in that case.
To make this work, the BASE_BACKUP command now takes new
COMPRESSION and COMPRESSION_LEVEL options.
At present, pg_basebackup cannot decompress .gz files, so
server-side compression will cause a failure if (1) -Ft is not
used or (2) -R is used or (3) -D- is used without --no-manifest.
Along the way, I removed the information message added by commit
5c649fe153 which occurred if you
specified no compression level and told you that the default level
had been used instead. That seemed like more output than most
people would want.
Also along the way, this adds a check to the server for
unrecognized base backup options. This repairs a bug introduced
by commit 0ba281cb4b.
This commit also adds some new test cases for pg_verifybackup.
They take a server-side backup with and without compression, and
then extract the backup if we have the OS facilities available
to do so, and then run pg_verifybackup on the extracted
directory. That is a good test of the functionality added by
this commit and also improves test coverage for the backup target
patch (commit 3500ccc39b) and for
pg_verifybackup itself.
Patch by me, with a bug fix by Jeevan Ladhe. The patch set of which
this is a part has also had review and/or testing from Tushar Ahuja,
Suraj Kharage, Dipesh Pandit, and Mark Dilger.
Discussion: http://postgr.es/m/CA+Tgmoa-ST7fMLsVJduOB7Eub=2WjfpHS+QxHVEpUoinf4bOSg@mail.gmail.com
Commit 9a974cbcba arranged to preserve
relfilenodes and tablespace OIDs. For similar reasons, also arrange
to preserve database OIDs.
One problem is that, up until now, the OIDs assigned to the template0
and postgres databases have not been fixed. This could be a problem
when upgrading, because pg_upgrade might try to migrate a database
from the old cluster to the new cluster while keeping the OID and find
a different database with that OID, resulting in a failure. If it finds
a database with the same name and the same OID that's OK: it will be
dropped and recreated. But the same OID and a different name is a
problem.
To prevent that, fix the OIDs for postgres and template0 to specific
values less than 16384. To avoid running afoul of this rule, these
values should not be changed in future releases. It's not a problem
that these OIDs aren't fixed in existing releases, because the OIDs
that we're assigning here weren't used for either of these databases
in any previous release. Thus, there's no chance that an upgrade of
a cluster from any previous release will collide with the OIDs we're
assigning here. And going forward, the OIDs will always be fixed, so
the only potential collision is with a system database having the
same name and the same OID, which is OK.
This patch lets users assign a specific OID to a database as well,
provided however that it is not less than 16384. I (rhaas) thought
it might be better not to expose this capability to users, but the
consensus was otherwise, so the syntax is documented. Letting users
assign OIDs below 16384 would not be OK, though, because a
user-created database with a low-numbered OID might collide with a
system-created database in a future release. We therefore prohibit
that.
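For example (database names and OIDs are hypothetical):
CREATE DATABASE db1 OID = 100000;  -- allowed, OID is above 16384
CREATE DATABASE db2 OID = 12345;   -- rejected, reserved for system use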
Shruthi KC, based on an earlier patch from Antonin Houska, reviewed
and with some adjustments by me.
Discussion: http://postgr.es/m/CA+TgmoYgTwYcUmB=e8+hRHOFA0kkS6Kde85+UNdon6q7bt1niQ@mail.gmail.com
Discussion: http://postgr.es/m/CAASxf_Mnwm1Dh2vd5FAhVX6S1nwNSZUB1z12VddYtM++H2+p7w@mail.gmail.com
The option --compress is extended to accept a compression method and an
optional compression level, using the grammar METHOD[:LEVEL]. The
methods currently supported are "none" and "gzip", for client-side
compression. Both of these methods use only an integer value for the
compression level, but any method implemented in the future could use
more specific keywords if necessary.
This commit keeps the logic backward-compatible. Hence, the following
compatibility rules apply for the new format of the option --compress:
* -z/--gzip is a synonym of --compress=gzip.
* --compress=NUM implies:
** --compress=none if NUM = 0.
** --compress=gzip:NUM if NUM > 0.
Note that there are also plans to extend this grammar further with
server-side compression.
Reviewed-by: Robert Haas, Magnus Hagander, Álvaro Herrera, David
G. Johnston, Georgios Kokolatos
Discussion: https://postgr.es/m/Yb3GEgWwcu4wZDuA@paquier.xyz
pg_basebackup now has a --target=TARGET[:DETAIL] option. If specified,
it is sent to the server as the value of the TARGET option to the
BASE_BACKUP command. If DETAIL is included, it is sent as the value of
the new TARGET_DETAIL option to the BASE_BACKUP command. If the
target is anything other than 'client', pg_basebackup assumes that it
will now be the server's job to write the backup in a location somehow
defined by the target, and that it therefore needs to write nothing
locally. However, the server will still send messages to the client
for progress reporting purposes.
On the server side, we now support two additional types of backup
targets. There is a 'blackhole' target, which just throws away the
backup data without doing anything at all with it. Naturally, this
should only be used for testing and debugging purposes, since you will
not actually have a backup when it finishes running. More usefully,
there is also a 'server' target, so you can now use something like
'pg_basebackup -Xnone -t server:/SOME/PATH' to write a backup to some
location on the server. We can extend this to more types of targets
in the future, and might even want to create an extensibility
mechanism for adding new target types.
Since WAL fetching is handled with separate client-side logic, it's
not part of this mechanism; thus, backups with non-default targets
must use -Xnone or -Xfetch.
Patch by me, with a bug fix by Jeevan Ladhe. The patch set of which
this is a part has also had review and/or testing from Tushar Ahuja,
Suraj Kharage, Dipesh Pandit, and Mark Dilger.
Discussion: http://postgr.es/m/CA+TgmoaYZbz0=Yk797aOJwkGJC-LK3iXn+wzzMx7KdwNpZhS5g@mail.gmail.com
The logic is similar to default_tablespace in some ways: with this new
option, no SET queries on default_table_access_method are generated
before dumping or restoring an object (tables and materialized views
support table AMs).
This option is useful to enforce the use of a default access method even
if some tables included in a dump use an AM different from the system's
default.
There are already two cases in the TAP tests of pg_dump with a table and
a materialized view that use a non-default table AM, and these are
extended to check that the new option does not generate SET clauses on
default_table_access_method.
Author: Justin Pryzby
Discussion: https://postgr.es/m/20211207153930.GR17618@telsasoft.com
The ACL is printed when you add + to the command, similarly to
various other psql backslash commands.
Along the way, move the code for this into describe.c,
where it is a better fit (and can share some code).
Pavel Luzanov, reviewed by Georgios Kokolatos
Discussion: https://postgr.es/m/6d722115-6297-bc53-bb7f-5f150e765299@postgrespro.ru
catalog/pg_class.h was stating that REPLICA_IDENTITY_INDEX with a
dropped index is equivalent to REPLICA_IDENTITY_DEFAULT. The code tells
a different story, as it is equivalent to REPLICA_IDENTITY_NOTHING.
This behavior has existed since the introduction of replica identities,
and fe7fd4e even added tests for this case, but I somewhat forgot to fix
this comment.
While on it, this commit reorganizes the documentation about replica
identities on the ALTER TABLE page, and a note is added about the case
of dropped indexes with REPLICA_IDENTITY_INDEX.
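For illustration (names are hypothetical):
CREATE TABLE t (a int NOT NULL, b int);
CREATE UNIQUE INDEX t_a_idx ON t (a);
ALTER TABLE t REPLICA IDENTITY USING INDEX t_a_idx;
DROP INDEX t_a_idx;
-- t now behaves as REPLICA_IDENTITY_NOTHING, not REPLICA_IDENTITY_DEFAULT.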
Author: Michael Paquier, Wei Wang
Reviewed-by: Euler Taveira
Discussion: https://postgr.es/m/OS3PR01MB6275464AD0A681A0793F56879E759@OS3PR01MB6275.jpnprd01.prod.outlook.com
Backpatch-through: 10
\getenv fetches the value of an environment variable into a psql
variable. This is the inverse of the \setenv command that was added
over ten years ago. We'd not seen a compelling use-case for \getenv
at the time, but upcoming regression test refactoring provides a
sufficient reason to add it now.
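For example, to copy $HOME into the psql variable "home" and print it:
\getenv home HOME
\echo :home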
Discussion: https://postgr.es/m/1655733.1639871614@sss.pgh.pa.us
This is an option consistent with what the other tools of src/bin/
(pg_checksums, pg_dump, pg_rewind and pg_basebackup) provide, which is
useful for reducing the I/O effort when testing things. It is not
to be used in a production environment.
All the regression tests of pg_upgrade are updated to use this new
option. This happens to cut at most a couple of seconds in environments
constrained on I/O, by avoiding a flush of the data folder of the newly
upgraded cluster.
Author: Michael Paquier
Reviewed-by: Peter Eisentraut
Discussion: https://postgr.es/m/YbrhzuBmBxS/DkfX@paquier.xyz
Server versions for which there was a plausible reason to
use this switch are all out of support now. Leaving it
around would accomplish little except to let careless DBAs
shoot themselves in the foot.
Discussion: https://postgr.es/m/556122.1639520324@sss.pgh.pa.us
Per discussion, we'll limit support for old servers to those branches
that can still be built easily on modern platforms, which as of now
is 9.2 and up.
Discussion: https://postgr.es/m/2923349.1634942313@sss.pgh.pa.us
Per discussion, we'll limit support for old servers to those branches
that can still be built easily on modern platforms, which as of now
is 9.2 and up. Remove over a thousand lines of code dedicated to
dumping from older server versions. (As in previous changes of
this sort, we aren't removing pg_restore's ability to read older
archive files ... though it's fair to wonder how that might be
tested nowadays.) This cleans up some dead code left behind by
commit 989596152.
Discussion: https://postgr.es/m/2923349.1634942313@sss.pgh.pa.us
Extend the foreign key ON DELETE actions SET NULL and SET DEFAULT by
allowing the specification of a column list, like
CREATE TABLE posts (
...
FOREIGN KEY (tenant_id, author_id) REFERENCES users ON DELETE SET NULL (author_id)
);
If a column list is specified, only those columns are set to
null/default, instead of all the columns in the foreign-key
constraint.
This is useful for multitenant or sharded schemas, where the tenant or
shard ID is included in the primary key of all tables but shouldn't be
set to null.
Author: Paul Martinez <paulmtz@google.com>
Discussion: https://www.postgresql.org/message-id/flat/CACqFVBZQyMYJV=njbSMxf+rbDHpx=W=B7AEaMKn8dWn9OZJY7w@mail.gmail.com
Previously, pg_waldump would not display its statistics summary if it
got interrupted by SIGINT (say, a simple Ctrl+C). With this commit it
gains a signal handler for SIGINT, trapping the signal so as to exit at
the earliest convenience and display the stats summary before exiting.
This makes the reports more interactive, similarly to strace -c.
This new behavior makes the combination of the options --stats and
--follow much more useful, since the user will get a report for any
invocation of pg_waldump in such a case. Information about the LSN
range of the stats computed is added as a header to the displayed
report.
This implementation comes from a suggestion by Álvaro Herrera and
myself, following a complaint by the author of this patch about --stats
and --follow not being useful together originally.
As documented, this is not supported on Windows, though support would
be possible by catching the terminal events associated with Ctrl+C, for
example (this may require a more centralized implementation, as other
tools could benefit from a common API).
Author: Bharath Rupireddy
Discussion: https://postgr.es/m/CALj2ACUUx3PcK2z9h0_m7vehreZAUbcmOky9WSEpe8TofhV=PQ@mail.gmail.com
ATTACHing a table into a partition tree whose root is published using a
publication with publish_via_partition_root set to true does not result in
the table's existing contents being replicated. This happens because the
subscriber doesn't consider replicating the newly attached partition, as
the root table is already in a 'ready' state.
This behavior was introduced in PG13 (83fd4532a7), where we allowed
publishing partition changes via ancestors.
We can consider fixing this limitation in the future.
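A sketch of the scenario (names are hypothetical; parent is a
range-partitioned table already published via pub):
CREATE PUBLICATION pub FOR TABLE parent
WITH (publish_via_partition_root = true);
ALTER TABLE parent ATTACH PARTITION part1 FOR VALUES FROM (1) TO (10);
-- rows already in part1 are not copied to existing subscribers.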
Author: Amit Langote
Reviewed-by: Hou Zhijie, Amit Kapila
Backpatch-through: 13
Discussion: https://postgr.es/m/OS0PR01MB5716E97F00732B52DC2BBC2594989@OS0PR01MB5716.jpnprd01.prod.outlook.com
Remove the confusing use of ORDER BY in an example materialized
view. It adds nothing to the example, but might encourage
people to follow bad practice. Clarify REFRESH MATERIALIZED
VIEW's note about whether view ordering is retained (it isn't).
Maciek Sakrejda
Discussion: https://postgr.es/m/CAOtHd0D-OvrUU0C=4hX28p4BaSE1XL78BAQ0VcDaLLt8tdUzsg@mail.gmail.com
pg_receivewal gains a new option, --compression-method=lz4, available
when the code is compiled with --with-lz4. Similarly to gzip, this
makes it possible to compress archived WAL segments with LZ4. This
option is not compatible with --compress.
The implementation uses LZ4 frames, and is compatible with simple lz4
commands. Like gzip, using --synchronous ensures that any data will be
flushed to disk within the current .partial segment, so as it is
possible to retrieve as much WAL data as possible even from a
non-completed segment (this requires completing the partial file with
zeros up to the WAL segment size supported by the backend after
decompression, but this is the same as gzip).
The calculation of the streaming start LSN is able to transparently find
and check LZ4-compressed segments. Contrary to gzip, where the
uncompressed size is directly stored in the object read, the LZ4 frame
protocol does not store the uncompressed data size by default. There is
a contentSize field that can be used with LZ4 frames, but that would not
help if using an archive that includes segments compressed with the
defaults of a "lz4" command, where this is not stored. So, this commit has taken
the most extensible approach by decompressing the already-archived
segment to check its uncompressed size, through a blank output buffer in
chunks of 64kB (no actual performance difference noticed with 8kB, 16kB
or 32kB, and the operation in itself is actually fast).
Tests have been added to verify the creation and correctness of the
generated LZ4 files. The latter is achieved by the use of command
"lz4", if found in the environment.
The tar-based WAL method in walmethods.c, used now only by
pg_basebackup, does not know yet about LZ4. Its code could be extended
for this purpose.
Author: Georgios Kokolatos
Reviewed-by: Michael Paquier, Jian Guo, Magnus Hagander, Dilip Kumar
Discussion: https://postgr.es/m/ZCm1J5vfyQ2E6dYvXz8si39HQ2gwxSZ3IpYaVgYa3lUwY88SLapx9EEnOf5uEwrddhx2twG7zYKjVeuP5MwZXCNPybtsGouDsAD1o2L_I5E=@pm.me
The option name was incorrect in one of the error messages, and the
short option 'I' was used in the code but we did not intend things to be
this way. While on it, fix the documentation to refer to a "method",
and not a "level".
Oversights in commit d62bcc8, that I have detected after more review of
the LZ4 patch for pg_receivewal.
pg_receivewal includes since cada1af the option --compress, to allow the
compression of WAL segments using gzip, with a value of 0 (the default)
meaning that no compression is used.
This commit introduces a new option, called --compression-method, which
accepts the values "none" (the default) and "gzip", to make things more
extensible. The case of --compress=0 becomes fuzzy with this option
layer, so we have made the choice to make pg_receivewal return an error
when using "none" with a non-zero compression level, meaning that the
authorized values of --compress are now [1,9] instead of [0,9]. Not
specifying --compress with "gzip" as compression method makes
pg_receivewal use the default of zlib instead (Z_DEFAULT_COMPRESSION).
The code in charge of finding the streaming start LSN when scanning the
existing archives is refactored and made more extensible. While on it,
rename "compression" to "compression_level" in walmethods.c, to reduce
the confusion with the introduction of the compression method, even if
the tar method used by pg_basebackup does not rely on the compression
method (yet, at least), but just on the compression level (this area
could be improved further).
This is in preparation for an upcoming patch that adds LZ4 support to
pg_receivewal.
Author: Georgios Kokolatos
Reviewed-by: Michael Paquier, Jian Guo, Magnus Hagander, Dilip Kumar,
Robert Haas
Discussion: https://postgr.es/m/ZCm1J5vfyQ2E6dYvXz8si39HQ2gwxSZ3IpYaVgYa3lUwY88SLapx9EEnOf5uEwrddhx2twG7zYKjVeuP5MwZXCNPybtsGouDsAD1o2L_I5E=@pm.me
Use verbiage like "The name of the table must be distinct from the name
of any other relation (table, sequence, index, view, materialized view,
or foreign table) in the same schema." in the reference pages for all
those object types. The main change here is to mention materialized
views explicitly, although a couple of these pages had failed to say
anything at all about name conflicts.
Per suggestion from Daniel Westermann.
Discussion: https://postgr.es/m/ZR0P278MB0920D0946509233459AF0DEFD2889@ZR0P278MB0920.CHEP278.PROD.OUTLOOK.COM
Previously, failures of the initial connection or logfile open caused
pgbench to proceed with the benchmark, report the incomplete results,
and exit with status 2. It didn't make sense to proceed with the
benchmark when pgbench could not start as prescribed.
This commit improves pgbench so that such early errors occurring when
starting the benchmark make pgbench exit immediately with status 1.
Author: Yugo Nagata
Reviewed-by: Fabien COELHO, Kyotaro Horiguchi, Fujii Masao
Discussion: https://postgr.es/m/TYCPR01MB5870057375ACA8A73099C649F5349@TYCPR01MB5870.jpnprd01.prod.outlook.com
A new option "FOR ALL TABLES IN SCHEMA" in Create/Alter Publication allows
one or more schemas to be specified, whose tables are selected by the
publisher for sending the data to the subscriber.
The new syntax allows specifying both the tables and schemas. For example:
CREATE PUBLICATION pub1 FOR TABLE t1,t2,t3, ALL TABLES IN SCHEMA s1,s2;
OR
ALTER PUBLICATION pub1 ADD TABLE t1,t2,t3, ALL TABLES IN SCHEMA s1,s2;
A new system table "pg_publication_namespace" has been added, to maintain
the schemas that the user wants to publish through the publication.
The output plugin (pgoutput) is modified to publish the changes if the
relation is part of a schema publication.
pg_dump is updated to identify and dump schema publications. The \d
family of commands is updated to display schema publications, and the
\dRp+ variant will now display associated schemas, if any.
Author: Vignesh C, Hou Zhijie, Amit Kapila
Syntax-Suggested-by: Tom Lane, Alvaro Herrera
Reviewed-by: Greg Nancarrow, Masahiko Sawada, Hou Zhijie, Amit Kapila, Haiying Tang, Ajin Cherian, Rahila Syed, Bharath Rupireddy, Mark Dilger
Tested-by: Haiying Tang
Discussion: https://www.postgresql.org/message-id/CALDaNm0OANxuJ6RXqwZsM1MSY4s19nuH3734j4a72etDwvBETQ@mail.gmail.com
Prior to this patch, when running pg_receivewal, the streaming start
point would be the current location of the archives if anything is
found in the local directory where WAL segments are written, and
pg_receivewal would fall back to the current WAL flush location, as
reported by an IDENTIFY_SYSTEM command, if there are no archives.
If for some reason the WAL files from pg_receivewal were moved, it is
better to try to restart where we left off, which is the replication
slot's restart_lsn, instead of skipping right to the current flush
location, to avoid holes in the WAL backed up. This commit changes
pg_receivewal to use the following sequence of methods to determine the
starting streaming LSN:
- Scan the local archives.
- Use the slot's restart_lsn, if supported by the backend and if a slot
is defined.
- Fall back to the current flush LSN as reported by IDENTIFY_SYSTEM.
To keep compatibility with older server versions, we only attempt to use
READ_REPLICATION_SLOT if the backend version is at least 15, and fall
back to the older behavior of streaming from the current flush LSN if
the command is not supported.
Some TAP tests are added to cover this feature.
Author: Ronan Dunklau
Reviewed-by: Kyotaro Horiguchi, Michael Paquier, Bharath Rupireddy
Discussion: https://postgr.es/m/18708360.4lzOvYHigE@aivenronan
The documentation was imprecise about the starting LSN used for WAL
streaming if nothing can be found in the local archive directory
defined with the pg_receivewal command, so be more talkative on this
matter.
Extracted from a larger patch by the same author.
Author: Ronan Dunklau, Michael Paquier
Discussion: https://postgr.es/m/18708360.4lzOvYHigE@aivenronan
Backpatch-through: 10