symlink() and readlink() are in SUSv2 and all targeted Unix systems have
them. We have partial emulation on Windows. Code that raised runtime
errors on systems without it has been dead for years, so we can remove
that and also references to such systems in the documentation.
Define HAVE_READLINK and HAVE_SYMLINK macros on Unix. Our Windows
replacement functions based on junction points can't be used for
relative paths or for non-directories, so the macros can be used to
check for full symlink support. The places that deal with tablespaces
can just use symlink functions without checking the macros. (If they
did check the macros, they'd need to provide an #else branch with a
runtime or compile-time error, and it'd be dead code.)
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/CA+hUKGJ3LHeP9w5Fgzdr4G8AnEtJ=z=p6hGDEm4qYGEUX5B6fQ@mail.gmail.com
These two new options can be used either to process all tables in
specific schemas or to skip processing all tables in specific schemas.
This change also refactors the handling of invalid combinations of
command-line options into a new helper function.
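For example, assuming these are vacuumdb's --schema and
--exclude-schema switches (schema and database names are placeholders):
    vacuumdb --schema s1 mydb
    vacuumdb --exclude-schema s2 mydb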
Author: Gilles Darold
Reviewed-by: Justin Pryzby, Nathan Bossart, Michael Paquier
Discussion: https://postgr.es/m/929fbf3c-24b8-d454-811f-1d5898ab3e91%40migops.com
This addresses a couple of bugs in the REINDEX grammar, introduced by
83011ce:
- A name was never specified for DATABASE/SYSTEM, even if the query
included one. This caused such REINDEX queries to always work with any
object name, but we should complain if the object name specified does
not match the name of the database we are connected to. A test is added
for this case in the main regression test suite, provided by Álvaro.
- REINDEX SYSTEM CONCURRENTLY [name] was getting rejected in the
parser. Concurrent rebuilds are not supported for catalogs but the
error provided at execution time is more helpful for the user, and
allowing this flavor results in a simplification of the parsing logic.
- REINDEX DATABASE CONCURRENTLY was rebuilding the indexes in a
non-concurrent way, as the option was not being appended correctly to
the list of DefElems in ReindexStmt (REINDEX (CONCURRENTLY) DATABASE was
working fine). A test is added in the TAP tests of reindexdb for this
case, where we already have a REINDEX DATABASE CONCURRENTLY query
running on a small-ish instance. This relies on the work done in
2cbc3c1 for SYSTEM, but here we check if the OIDs of the index relations
match or not after the concurrent rebuild. Note that in order to get
this part to work, I had to tweak the tests so that the index OIDs and
names are saved separately. This change does not affect the reliability
or the coverage of the existing tests.
While on it, I have tweaked the grammar to reduce the parsing by one
branch, simplifying things even more.
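For instance, with this fix both of these spellings trigger a
concurrent rebuild:
    REINDEX DATABASE CONCURRENTLY;
    REINDEX (CONCURRENTLY) DATABASE;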
Author: Michael Paquier, Álvaro Herrera
Discussion: https://postgr.es/m/YttqI6O64wDxGn0K@paquier.xyz
The setting controls the maximum length of the header line in expanded
format output. Possible settings are full, column, page, or an integer.
The default is full, the current behaviour: the header line is the
length of the widest line of output. column causes the header to be
truncated to the width of the first column, page causes it to be
truncated to the width of the terminal page, and an integer causes it
to be truncated to that value. If the full width is less than the page
or integer value, no truncation occurs. If given without an argument,
this option prints its current setting.
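For example, assuming this is exposed as psql's xheader_width \pset
option (values are illustrative):
    \pset xheader_width page
    \pset xheader_width 40
    \pset xheader_width    -- prints the current setting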
Platon Pronko, somewhat modified by me.
Discussion: https://postgr.es/m/f03d38a3-db96-a56e-d1bc-dbbc80bbde4d@gmail.com
We didn't explicitly say that random() uses a randomly-chosen seed
if you haven't called setseed(). Do so.
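For example, seeding makes the sequence reproducible:
    SELECT setseed(0.42);
    SELECT random();  -- repeatable after setseed(); otherwise a random seed is used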
Also, remove ref/set.sgml's no-longer-accurate (and never very
relevant) statement that the seed value is multiplied by 2^31-1.
Back-patch to v12 where set.sgml's claim stopped being true.
The claim that we use a source of random bits as seed was debatable
before 4203842a1, too, so v12 seems like a good place to stop.
Per question from Carl Sopchak.
Discussion: https://postgr.es/m/f37bb937-9d99-08f0-4de7-80c91a3cfc2e@sopchak.me
This allows users to omit the statistics name in a CREATE STATISTICS
command, letting the system auto-generate a sensible, unique name and
placing the statistics object in the same schema as the table.
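For example (hypothetical table and column names):
    CREATE STATISTICS ON a, b FROM mytable;  -- the name is generated automatically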
Simon Riggs, reviewed by Matthias van de Meent.
Discussion: https://postgr.es/m/CANbhV-FGD2d_C3zFTfT2aRfX_TaPSgOeKES58RLZx5XzQp5NhA@mail.gmail.com
This patch adds a new SUBSCRIPTION parameter "origin". It specifies
whether the subscription will request the publisher to only send changes
that don't have an origin or send changes regardless of origin. Setting it
to "none" means that the subscription will request the publisher to only
send changes that have no origin associated. Setting it to "any" means
that the publisher sends changes regardless of their origin. The default
is "any".
Usage:
CREATE SUBSCRIPTION sub1 CONNECTION 'dbname=postgres port=9999'
PUBLICATION pub1 WITH (origin = none);
This can be used to avoid loops (infinite replication of the same data)
among replication nodes.
This feature allows filtering only the replication data originating from
WAL; for the initial sync (the initial copy of table data) we don't have
such a facility, as we can only distinguish the data based on origin from
WAL. As a follow-up patch, we are planning to forbid the initial sync if
the origin is specified as "none" and we notice that the publication
tables were also replicated from other publishers, to avoid duplicate
data or loops.
Creating replication origins named "none" or "any" is forbidden, to
avoid confusion with the options of the same names.
Author: Vignesh C, Amit Kapila
Reviewed-By: Peter Smith, Amit Kapila, Dilip Kumar, Shi yu, Ashutosh Bapat, Hayato Kuroda
Discussion: https://postgr.es/m/CALDaNm0gwjY_4HFxvvty01BOT01q_fJLKQ3pWP9=9orqubhjcQ@mail.gmail.com
Upcoming custom Table Access Methods (TableAMs) need benchmarking.
Although pgbench doesn't have an explicit option for TableAM
specification, one can specify it using the PGOPTIONS environment
variable. The present commit documents this way of specifying the
TableAM for pgbench.
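For example (illustrative; heap stands in for the custom TableAM's name):
    PGOPTIONS='-c default_table_access_method=heap' pgbench -i mydb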
Discussion: https://postgr.es/m/CAC77N6ih%3DLbhZQXV76grEsaVQkBL464Y2Foqq9o%3Df4UBfEOfEQ%40mail.gmail.com
Author: Michel Pelletier, Alexander Korotkov
Reviewed-by: Justin Pryzby, Mason Sharp, Michael Paquier
This allows aliases for sub-SELECTs and VALUES clauses in the FROM
clause to be omitted.
This is an extension to the SQL standard, supported by some other
database systems, and so it eases the transition from such systems, as
well as removing the minor inconvenience of requiring these aliases.
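For example, both of these are now accepted without an alias:
    SELECT * FROM (SELECT generate_series(1, 3));
    SELECT * FROM (VALUES (1), (2));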
Patch by me, reviewed by Tom Lane.
Discussion: https://postgr.es/m/CAEZATCUCGCf82=hxd9N5n6xGHPyYpQnxW8HneeH+uP7yNALkWA@mail.gmail.com
Per discussion, this commit includes a couple of changes to these two
flavors of REINDEX:
* The grammar is changed to make the name of the object optional, hence
one can rebuild all the indexes of the wanted area by specifying only
"REINDEX DATABASE;" or "REINDEX SYSTEM;". Previously, the object name
was mandatory and had to match the name of the database on which the
command is issued.
* REINDEX DATABASE is changed to ignore catalogs, making this task only
possible with REINDEX SYSTEM. This is a change in historical behavior,
but previously there was no way to work only on the indexes of a
database without also touching the catalogs. Several approaches were
discussed here, like the addition of an option to skip the catalogs
without changing the original behavior, but we concluded that what we
have here is for the best (see the example below).
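For example, after this change:
    REINDEX DATABASE;   -- indexes of the current database, catalogs excluded
    REINDEX SYSTEM;     -- catalog indexes only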
This builds on top of the TAP tests introduced in 5fb5b6c, showing the
change in behavior for REINDEX SYSTEM. reindexdb is updated so that we
do not issue an extra REINDEX SYSTEM when working on a database in the
non-concurrent case, something that was confusing when --concurrently
got introduced; this simplifies the code.
Author: Simon Riggs
Reviewed-by: Ashutosh Bapat, Bernd Helmle, Álvaro Herrera, Cary Huang,
Michael Paquier
Discussion: https://postgr.es/m/CANbhV-H=NH6Om4-X6cRjDWfH_Mu1usqwkuYVp-hwdB_PSHWRfg@mail.gmail.com
The following options are added to createuser:
* --valid-until to generate a VALID UNTIL clause for the role created.
* --bypassrls/--no-bypassrls for BYPASSRLS/NOBYPASSRLS.
* -m/--member to make the new role a member of an existing role, with an
extra ROLE clause generated. The clause generated overlaps with
-g/--role, but per discussion this was the most popular choice as option
name.
* -a/--admin for the addition of an ADMIN clause.
These option names are chosen to be completely new, so that they do not
impact anybody relying on the existing option set. Tests are added for
the new options and, while on it, extended a bit to cover more patterns
where quotes are added to various elements of the generated query.
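Illustrative invocations (role names and the timestamp are placeholders):
    createuser --valid-until '2030-01-01' --bypassrls batch_role
    createuser -m member_role -a admin_role new_group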
Author: Shinya Kato
Reviewed-by: Nathan Bossart, Daniel Gustafsson, Robert Haas, Kyotaro
Horiguchi, David G. Johnston, Przemysław Sztoch
Discussion: https://postgr.es/m/69a9851035cf0f0477bcc5d742b031a3@oss.nttdata.com
Now some foreign data wrappers support the TRUNCATE command, so it's
useful to support TRUNCATE triggers on foreign tables, for audit
logging or for preventing undesired truncation.
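A minimal sketch (hypothetical table and function names):
    CREATE TRIGGER audit_truncate
        AFTER TRUNCATE ON my_foreign_table
        FOR EACH STATEMENT
        EXECUTE FUNCTION log_truncate();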
Author: Yugo Nagata
Reviewed-by: Fujii Masao, Ian Lawrence Barwick
Discussion: https://postgr.es/m/20220630193848.5b02e0d6076b86617a915682@sraoss.co.jp
We have had a working and tunable autovacuum
for at least a decade now, so remove the recommendation to
manually vacuum tables at least every night.
Autovacuum is now also triggered by INSERTs, so we can also
remove the recommendation to run VACUUM (ANALYZE) after lots
of INSERTs or DELETEs.
Instead, suggest using autovacuum by moving the respective
paragraph up to where the importance of VACUUM is emphasized.
Author: Laurenz Albe <laurenz.albe@cybertec.at>
Reviewed-By: Magnus Hagander, Peter Geoghegan
Discussion: https://postgr.es/m/6f5e3da98fec14640f389d7b84c3b413833697f4.camel@cybertec.at
This patch documents that the initial data synchronization (tablesync) for
logical replication does not take into account the publication 'publish'
parameter when copying the existing table data.
Author: Peter Smith
Reviewed-by: Shi yu, Euler Taveira, Robert Haas, Amit Kapila
Discussion: https://postgr.es/m/CAHut+PtbfALjFpS2MkrvQ+wWQKByP7CNh9RtFta-r=BHEU3S3w@mail.gmail.com
072132f0 used the attnum offset to access the raw_fields array when
checking that the attribute names of the header and of the relation
match, leading to incorrect results or even crashes if the attribute
numbers of a relation are changed, for example after an attribute has
been dropped. This
fixes the logic to use the correct attribute names for the header
matching requirements.
Also, this commit disallows HEADER MATCH in COPY TO as there is no
validation that can be done in this case.
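For example (illustrative table and file names):
    COPY tab FROM '/tmp/data.csv' WITH (FORMAT csv, HEADER MATCH);
    COPY tab TO '/tmp/out.csv' WITH (FORMAT csv, HEADER MATCH);  -- now an error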
The tests are expanded for HEADER MATCH with COPY FROM and dropped
columns, with cases where a relation has a dropped and re-added column,
as well as a reduced set of columns.
Author: Julien Rouhaud
Reviewed-by: Peter Eisentraut, Michael Paquier
Discussion: https://postgr.es/m/20220607154744.vvmitnqhyxrne5ms@jrouhaud
In addition, this moves the new paragraph in the MVCC page upwards, for
a more consistent flow; some minor markup mistakes, style issues and
typos are fixed too.
Per comments from Justin Pryzby.
Discussion: https://postgr.es/m/20220511163350.GL19626@telsasoft.com
This commit, in completion of 157f873, forces a ROLLBACK for
--single-transaction only when ON_ERROR_STOP is used and one of the
steps defined by -f/-c fails. Hence, COMMIT is always used when
ON_ERROR_STOP is not set, ignoring the status code of the last action
taken in the set of switches specified by -c/-f. (Previously, ROLLBACK
would have been issued even without ON_ERROR_STOP if the last step
failed, while COMMIT was issued if a step in between failed as long as
the last step succeeded, leading to more inconsistency.)
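Illustrative invocation (file and database names are placeholders):
    # With ON_ERROR_STOP set, a failing step now forces a ROLLBACK:
    psql -1 -v ON_ERROR_STOP=1 -f step1.sql -f step2.sql mydb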
While on it, this adds much more test coverage in this area when not
using ON_ERROR_STOP with multiple switch patterns involving -c and -f
for query files, single queries and slash commands.
The behavior of ON_ERROR_STOP is arguably a bug, but there was not much
support for a backpatch to force a ROLLBACK on a step failure, so this
change is done only on HEAD for now.
Per discussion with Tom Lane and Kyotaro Horiguchi.
Discussion: https://postgr.es/m/Yqbc8bAdwnP02na4@paquier.xyz
The previous wording was "the underlying data type's default collation
is used", which is wrong or at least misleading. The domain inherits
the base type's collation behavior; if that is "default", it can
actually mean that some non-default collation obtained from elsewhere
is used.
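A small sketch of the behavior (hypothetical domain name):
    CREATE DOMAIN mytext AS text;          -- no explicit COLLATE
    SELECT 'a'::mytext < 'B' COLLATE "C";  -- collation comes from the context,
                                           -- not from a fixed default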
Per complaint from Jian He.
Discussion: https://postgr.es/m/CACJufxHMR8_4WooDPjjvEdaxB2hQ5a49qthci8fpKP0MKemVRQ@mail.gmail.com
38bfae3 has moved the contents written to files by pg_upgrade under a
new directory called pg_upgrade_output.d/ located in the new cluster's
data folder, and it used a simple structure made of two subdirectories
leading to a fixed structure: log/ and dump/. This design made
pg_upgrade weaker on repeated calls, as we could get failures when
creating one or more of those directories, while potentially losing the
logs of a previous run (logs are retained automatically on failure, and
cleaned up on success unless --retain is specified). So a user would
need to clean up pg_upgrade_output.d/ as an extra step for any repeated
calls of pg_upgrade. The most common scenario here is --check followed
by the actual upgrade, but one could see a failure when specifying an
incorrect input argument value. Removing the logs entirely would have
the disadvantage of discarding all the past information, even if
--retain was specified at some past step.
This result is annoying for a lot of users and automated upgrade flows.
So, rather than requiring a manual removal of pg_upgrade_output.d/, this
redesigns the set of output directories in a more dynamic way, based on
a suggestion from Tom Lane and Daniel Gustafsson. pg_upgrade_output.d/
is still the base path, but a second directory level is added, named
after an ISO-8601-formatted timestamp (in short, human-readable, with
milliseconds appended to the name to avoid any conflicts). The logs
and dumps are saved within the same subdirectories as previously,
namely log/ and dump/, but these are located inside the subdirectory
named after the timestamp.
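The resulting layout looks roughly like this (the timestamp shown here
is illustrative):
    pg_upgrade_output.d/
        20380119T031407.123/
            log/
            dump/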
The logs of a given run are removed only after a successful run if
--retain is not used, and pg_upgrade_output.d/ is kept if there are any
logs from a previous run. Note that previously, pg_upgrade would have
kept the logs even after a successful --check, but that was inconsistent
with the case without --check when using --retain. The code in
charge of the removal of the output directories is now refactored into a
single routine.
Two TAP tests are added with some --check commands (one failure case and
one success case), to check for the issue fixed here. Note that the
tests had to be tweaked a bit to fit with the new directory structure,
so that they can find any logs generated on failure. This is still
going to require a change in the buildfarm client for the case where
pg_upgrade is tested without the TAP test, but I'll tackle that with a
separate patch where needed.
Reported-by: Tushar Ahuja
Author: Michael Paquier
Reviewed-by: Daniel Gustafsson, Justin Pryzby
Discussion: https://postgr.es/m/77e6ecaa-2785-97aa-f229-4b6e047cbd2b@enterprisedb.com
psql --single-transaction is able to handle multiple -c and -f switches
in a single transaction since d5563d7d, but this had the surprising
behavior of forcing a transaction COMMIT even if psql failed with an
error in the client (for example incorrect path given to \copy), which
would generate an error, but still commit any changes that were already
applied in the backend. This commit makes the behavior more consistent
by enforcing a transaction ROLLBACK if any command fails, whether
client-side or backend-side, so that no changes are applied if an error
happens in any of them.
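Illustrative invocation (paths and names are placeholders):
    # A client-side failure, like a bad \copy path, now forces a ROLLBACK:
    psql -1 -c 'INSERT INTO t VALUES (1)' -c '\copy t from /bad/path' mydb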
Some tests are added on HEAD to provide some coverage of all that.
Backend-side errors are unreliable as IPC::Run can complain on SIGPIPE
if psql quits before reading a query result, but that should work
properly in the case where any errors come from psql itself, which is
what the original report is about.
Reported-by: Christoph Berg
Author: Kyotaro Horiguchi, Michael Paquier
Discussion: https://postgr.es/m/17504-76b68018e130415e@postgresql.org
Backpatch-through: 10
Currently, we simply combine the column lists when publishing tables on
multiple publications and that can sometimes lead to unexpected behavior.
Say, if a column is published in any row-filtered publication, then the
values for that column are sent to the subscriber even for rows that don't
match the row filter, as long as the row matches the row filter for any
other publication, even if that other publication doesn't include the
column.
The main purpose of introducing a column list is to have statically
different shapes on publisher and subscriber or hide sensitive column
data. In both cases, it doesn't seem to make sense to combine column
lists.
So, we disallow the cases where the column list is different for the same
table when combining publications. This can later be extended to
combine the column lists in selected cases where required.
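For example (hypothetical publication and table names):
    CREATE PUBLICATION pub_ab FOR TABLE t (a, b);
    CREATE PUBLICATION pub_a FOR TABLE t (a);
    -- Combining pub_ab and pub_a in one subscription now fails,
    -- since the column lists for t differ.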
Reported-by: Alvaro Herrera
Author: Hou Zhijie
Reviewed-by: Amit Kapila
Discussion: https://postgr.es/m/202204251548.mudq7jbqnh7r@alvherre.pgsql
This reverts commit d9d076222f "VACUUM: ignore indexing operations
with CONCURRENTLY".
These changes caused indexes created with the CONCURRENTLY option to
miss heap tuples that were HOT-updated and HOT-pruned during the index
creation. Before these changes, HOT pruning would have been prevented
by the Xmin of the transaction creating the index, but because this
change was precisely to allow the Xmin to move forward ignoring that
backend, now other backends scanning the table can prune them. This is
not a problem for VACUUM (which requires a lock that conflicts with a
CREATE INDEX CONCURRENTLY operation), but HOT-prune can definitely
occur. In other words, Xmin advancement was sped up, but at the cost of
corrupting the resulting index.
Regrettably, this means that the new feature in PG14 whereby RIC/CIC on
very large tables no longer forces VACUUM to retain very old tuples goes
away. We might try to implement it again in a later release, but for
now the risk of indexes missing tuples is too high and there's no easy
fix.
Backpatch to 14, where this change appeared.
Reported-by: Peter Slavov <pet.slavov@gmail.com>
Diagnosed-by: Andrey Borodin <x4mmm@yandex-team.ru>
Diagnosed-by: Michael Paquier <michael@paquier.xyz>
Diagnosed-by: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/17485-396609c6925b982d%40postgresql.org
Foreign tables can be used as partitions, but previous documentation
commits left the syntax synopsis both incomplete and incorrect.
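For example, the synopsis has to cover forms like this one (names are
illustrative):
    CREATE FOREIGN TABLE measurement_2022
        PARTITION OF measurement
        FOR VALUES FROM ('2022-01-01') TO ('2023-01-01')
        SERVER remote_server;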
Justin Pryzby and Amit Langote
Discussion: http://postgr.es/m/20220521130922.GX19626@telsasoft.com
psql, pg_dump, and pg_amcheck share code to process object name
patterns like 'foo*.bar*' to match all tables with names starting with
'bar' that are in schemas starting with 'foo'. Before v14, any number
of extra name parts were silently ignored, so a command line '\d
foo.bar.baz.bletch.quux' was interpreted as '\d bletch.quux'. In v14,
as a result of commit 2c8726c4b0, we
instead treated this as a request for table quux in a schema named
'foo.bar.baz.bletch'. That caused problems for people like Justin
Pryzby who were accustomed to copying strings of the form
db.schema.table from messages generated by PostgreSQL itself and using
them as arguments to \d.
Accordingly, revise things so that if an object name pattern contains
more parts than we're expecting, we throw an error, unless there's
exactly one extra part and it matches the current database name.
That way, thisdb.myschema.mytable is accepted as meaning just
myschema.mytable, but otherdb.myschema.mytable is an error, and so
is some.random.garbage.myschema.mytable.
Mark Dilger, per report from Justin Pryzby and discussion among
various people.
Discussion: https://www.postgresql.org/message-id/20211013165426.GD27491%40telsasoft.com