Using --quiet in combination with --no-strict-names didn't work as
documented: a warning message was still emitted. Since the --quiet
flag behaved in an unconventional way compared to other utilities, fix
by removing the functionality instead.
Backpatch through 14 where pg_amcheck was introduced.
Bug: 17148
Reported-by: Chen Jiaoqian <chenjq.jy@fujitsu.com>
Reviewed-by: Julien Rouhaud <rjuju123@gmail.com>
Discussion: https://postgr.es/m/17148-b5087318e2b04fc6@postgresql.org
Backpatch-through: 14
This reverts the following commits:
1b5617eb844cd2470a334c1d2eec66cf9b39c41a Describe (auto-)analyze behavior for partitioned tables
0e69f705cc1a3df273b38c9883fb5765991e04fe Set pg_class.reltuples for partitioned tables
41badeaba8beee7648ebe7923a41c04f1f3cb302 Document ANALYZE storage parameters for partitioned tables
0827e8af70f4653ba17ed773f123a60eadd9f9c9 autovacuum: handle analyze for partitioned tables
There are efficiency issues in this code when handling databases with
large numbers of partitions, and it doesn't look like there is any
trivial way to handle those. There are some other issues as well. It's
now too late in the cycle for nontrivial fixes, so we'll have to let
Postgres 14 users continue to ANALYZE their partitioned tables
manually, and hopefully we can fix the issues for Postgres 15.
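In practice that means something like the following still has to be run
by hand on v14 (the table name is hypothetical):
    ANALYZE measurement;  -- partitioned parent; autovacuum will not analyze it in v14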
I kept [most of] be280cdad298 ("Don't reset relhasindex for partitioned
tables on ANALYZE") because while we added it due to 0827e8af70f4, it is
a good bugfix in its own right, since it affects manual analyze as well
as autovacuum-induced analyze, and there's no reason to revert it.
I retained the addition of relkind 'p' to tables included by
pg_stat_user_tables, because reverting that would require a catversion
bump.
Also, in pg14 only, I keep a struct member that was added to
PgStat_TabStatEntry to avoid breaking compatibility with existing stat
files.
Backpatch to 14.
Discussion: https://postgr.es/m/20210722205458.f2bug3z6qzxzpx2s@alap3.anarazel.de
The check for sslsni only checked for the existence of the parameter
but not for its actual value. This meant that the SNI extension was
always turned on. Fix by inspecting the value of sslsni and activating
the SNI extension only if sslsni has been enabled. Also update the
docs to be more in line with how other boolean parameters are
documented.
Backpatch to 14 where sslsni was first implemented.
Reviewed-by: Tom Lane
Backpatch-through: 14, where sslsni was added
In ec34040af I added a mention that there was no point in setting
maintenance_work_mem to anything higher than 1GB for vacuum, but that
was incorrect as ginInsertCleanup() also looks at what
maintenance_work_mem is set to during VACUUM and that's not limited to
1GB.
Here I attempt to make it more clear that the limitation is only around
the number of dead tuple identifiers that we can collect during VACUUM.
I've also added a note to autovacuum_work_mem to mention this limitation.
I didn't do that in ec34040af as I'd had some wrong-headed ideas about
just limiting the maximum value for that GUC to 1GB.
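To sketch the distinction (the setting and table name are only examples):
    SET maintenance_work_mem = '2GB';
    VACUUM gin_indexed_table;  -- the dead tuple identifier array is still capped at 1GB,
                               -- but ginInsertCleanup() can use the full 2GB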
Author: David Rowley
Discussion: https://postgr.es/m/CAApHDvpGwOAvunp-E-bN_rbAs3hmxMoasm5pzkYDbf36h73s7w@mail.gmail.com
Backpatch-through: 9.6, same as ec34040af
Since commit e462856a7a, pg_upgrade automatically creates a script to
update extensions, so mention that instead of ALTER EXTENSION.
Backpatch-through: 9.6
Copy-and-pasteo in 665c5855e, evidently. The 9.6 docs toolchain
whined about duplicate index entries, though our modern toolchain
doesn't. In any case, these GUCs surely are not about the
default settings of these values.
postgres_fdw imported generated columns from the remote tables as plain
columns, and caused failures like "ERROR: cannot insert a non-DEFAULT
value into column "foo"" when inserting into the foreign tables, as it
tried to insert values into the generated columns. To fix, we do the
following under the assumption that generated columns in a postgres_fdw
foreign table are defined so that they represent generated columns in
the underlying remote table:
* Send DEFAULT for the generated columns to the foreign server on insert
or update, not generated column values computed on the local server.
* Add to postgresImportForeignSchema() an option "import_generated" to
include column generated expressions in the definitions of foreign
tables imported from a foreign server. The option is true by default.
The assumption seems reasonable, because that would make a query of the
postgres_fdw foreign table return values for the generated columns that
are consistent with the generated expression.
While here, fix another issue in postgresImportForeignSchema(): it tried
to include column generated expressions as column default expressions in
the foreign table definitions when the import_default option was enabled.
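As a rough illustration of the resulting behavior (server, schema, and
table names here are hypothetical):
    -- remote: CREATE TABLE public.t (a int, b int GENERATED ALWAYS AS (a * 2) STORED);
    IMPORT FOREIGN SCHEMA public FROM SERVER loopback INTO fdw_schema
      OPTIONS (import_generated 'true');      -- the default; imports b's generation expression
    INSERT INTO fdw_schema.t (a) VALUES (1);  -- DEFAULT is sent for b, so it is computed remotely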
Per bug #16631 from Daniel Cherniy. Back-patch to v12 where generated
columns were added.
Discussion: https://postgr.es/m/16631-e929fe9db0ffc7cf%40postgresql.org
It wasn't all that clear which lock levels, if any, would be held on the
DEFAULT partition during an ATTACH PARTITION operation.
Also, clarify which locks will be taken if the DEFAULT partition or the
table being attached are themselves partitioned tables.
Here I'm only backpatching to v12, as before that we obtained an ACCESS
EXCLUSIVE lock on the partitioned table. It seems much less relevant to
mention which locks are taken on other tables when the partitioned table
itself is locked with an ACCESS EXCLUSIVE lock.
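Roughly, the behavior now spelled out in the docs is (table names are
made up):
    ALTER TABLE parted ATTACH PARTITION part1 FOR VALUES IN (1);
    -- takes SHARE UPDATE EXCLUSIVE on parted, plus ACCESS EXCLUSIVE on part1
    -- and on the DEFAULT partition, if one exists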
Author: Matthias van de Meent, David Rowley
Discussion: https://postgr.es/m/CAEze2WiTB6iwrV8W_J=fnrnZ7fowW3qu-8iQ8zCHP3FiQ6+o-A@mail.gmail.com
Backpatch-through: 12
The error messages using the word "non-negative" are confusing
because it is ambiguous whether zero is accepted or not.
This commit improves those error messages by replacing the word
with less ambiguous phrasing like "greater than zero" or
"greater than or equal to zero".
This commit also adds a note about the word "non-negative" to the
error message style guide, to help with writing new error messages.
When the postgres_fdw option fetch_size was set to zero, the error
message "fetch_size requires a non-negative integer value" was
previously reported. This error message was outright buggy. Therefore,
back-patch to all supported versions where such a buggy error message
could be thrown.
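For example (the server name is hypothetical):
    ALTER SERVER loopback OPTIONS (ADD fetch_size '0');
    -- before: ERROR:  fetch_size requires a non-negative integer value
    -- after:  the message says the value must be greater than zero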
Reported-by: Hou Zhijie
Author: Bharath Rupireddy
Reviewed-by: Kyotaro Horiguchi, Fujii Masao
Discussion: https://postgr.es/m/OS0PR01MB5716415335A06B489F1B3A8194569@OS0PR01MB5716.jpnprd01.prod.outlook.com
Add pg_resetxlog -u option to set the oldest xid in pg_control.
Previously -x set this value to be 2 billion less than the -x value.
However, this causes the server to immediately scan all relations'
relfrozenxid so it can advance pg_control's oldest xid to be inside the
autovacuum_freeze_max_age range, which is inefficient and might disrupt
diagnostic recovery. pg_upgrade will use this option so that the new
cluster better matches the old cluster.
Reported-by: Jason Harvey, Floris Van Nee
Discussion: https://postgr.es/m/20190615183759.GB239428@rfd.leadboat.com, 87da83168c644fd9aae38f546cc70295@opammb0562.comp.optiver.com
Author: Bertrand Drouvot
Backpatch-through: 9.6
The documentation mentioned the use of log_checkpoints, which cannot be
used in this context. This commit replaces log_checkpoints with
force_parallel_mode, a developer option useful for performing checks
related to parallelism.
Oversight in 854434c.
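For reference, the developer option mentioned can simply be enabled per
session:
    SET force_parallel_mode = on;  -- forces a parallel plan (Gather) where the query is parallel safe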
Author: Haiying Tang
Discussion: https://postgr.es/m/OS0PR01MB6113954B883ACEB2DDC973F2FBE59@OS0PR01MB6113.jpnprd01.prod.outlook.com
Backpatch-through: 14
It has been spotted that multiranges lack the ability to be decomposed
into individual ranges. Subscripting and proper expanded object
representation require substantial work, and it's too late for v14.
This commit provides the implementation of unnest(multirange), which is
quite trivial. unnest(multirange) is defined as a polymorphic
set-returning function.
Catversion is bumped.
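For example:
    SELECT unnest('{[1,3), [5,8)}'::int4multirange);
    --  unnest
    -- --------
    --  [1,3)
    --  [5,8)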
Reported-by: Jonathan S. Katz
Discussion: https://postgr.es/m/flat/60258efe-bd7e-4886-82e1-196e0cac5433%40postgresql.org
Author: Alexander Korotkov
Reviewed-by: Justin Pryzby, Jonathan S. Katz, Zhihong Yu, Tom Lane
Reviewed-by: Alvaro Herrera
We had documentation of default_transaction_isolation et al,
but for some reason not of transaction_isolation et al.
AFAICS this is just an ancient oversight, so repair.
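For instance, the newly documented parameter can be set and read
directly:
    BEGIN;
    SET transaction_isolation = 'repeatable read';  -- same effect as SET TRANSACTION ISOLATION LEVEL REPEATABLE READ
    SHOW transaction_isolation;
    COMMIT;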
Per bug #17077 from Yanliang Lei.
Discussion: https://postgr.es/m/17077-ade8e166a01e1374@postgresql.org
Ensure to properly mark up function parameters in text with <parameter>,
avoid using <acronym> for terms which aren't acronyms and properly place
the ", and" in a value list. The acronym removal is a follow-up to commit
fb72a7b8c3 which removed it for minmax-multi. In passing, also fix an
incorrectly cased word.
Author: Ekaterina Kiryanova <e.kiryanova@postgrespro.ru>
Reviewed-by: Laurenz Albe <laurenz.albe@cybertec.at>
Discussion: https://postgr.es/m/c050ecbc-80b2-b360-3c1d-9fe6a6a11bb5@postgrespro.ru
Backpatch-through: v14
"Result Cache" was never a great name for this node, but nobody managed
to come up with another name that anyone liked enough. That was until
David Johnston mentioned "Node Memoization", which Tom Lane revised to
just "Memoize". People seem to like "Memoize", so let's do the rename.
Reviewed-by: Justin Pryzby
Discussion: https://postgr.es/m/20210708165145.GG1176@momjian.us
Backpatch-through: 14, where Result Cache was introduced
The name introduced by commit 4656e3d66 was agreed to be unreasonably
long. To match this change, rename initdb's recently-added
--clobber-cache option to --discard-caches.
Discussion: https://postgr.es/m/1374320.1625430433@sss.pgh.pa.us
When sending queries in pipeline mode, we were careless about leaving
the connection in the right state so that PQgetResult would behave
correctly; after having read a result with an error, sending another
query and then trying to read further results would sometimes hang. Fix by
ensuring internal libpq state is changed properly. All the state
changes were being done by the callers of pqAppendCmdQueueEntry(); it
would have become too repetitious to have this logic in each of them, so
instead put it all in that function and relieve callers of the
responsibility.
Add a test to verify this case. Without the code fix, this new test
hangs sometimes.
Also, document that PQisBusy() returns false when no query results are
pending. This is not intuitively obvious, since calling PQgetResult()
at that point returns NULL, which can be confusing.
Wording by Boris Kolpackov.
In passing, fix bogus use of "false" to mean "0", per Ranier Vilela.
Backpatch to 14.
Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
Reported-by: Boris Kolpackov <boris@codesynthesis.com>
Discussion: https://postgr.es/m/boris.20210624103805@codesynthesis.com
This commit fixes wrong wording like "a fewer kinds"
in the description of the track_planning option.
Back-patch to v13 where pg_stat_statements.track_planning was added.
Author: Justin Pryzby
Reviewed-by: Julien Rouhaud, Fujii Masao
Discussion: https://postgr.es/m/20210418233615.GB7256@telsasoft.com
Previously, values such as '100$%$#$#' or '9,223,372,' were accepted and
treated as valid integers for the postgres_fdw options batch_size and
fetch_size, whereas the fdw_startup_cost and fdw_tuple_cost options
threw an error for such values. This was because endptr was not checked
when converting strings to integers using strtol.
This commit changes the logic to use the parse_int function instead of
strtol, as it serves the purpose by returning false when it is unable
to convert the string to an integer. Note that this function also
rounds off values such as '100.456' to 100 and '100.567' or '100.678'
to 101.
While at it, use parse_real for the fdw_startup_cost and fdw_tuple_cost
options. Since parse_int and parse_real are already used for reloptions
and GUCs, it is more appropriate to use them in postgres_fdw than to
call strtol and strtod directly.
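For illustration (assuming fetch_size was previously set on a server
named "loopback"):
    ALTER SERVER loopback OPTIONS (SET fetch_size '100.456');    -- accepted, rounded to 100
    ALTER SERVER loopback OPTIONS (SET fetch_size '100$%$#$#');  -- now rejected with an error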
Back-patch to v14.
Author: Bharath Rupireddy
Reviewed-by: Ashutosh Bapat, Tom Lane, Kyotaro Horiguchi, Fujii Masao
Discussion: https://postgr.es/m/CALj2ACVMO6wY5Pc4oe1OCgUOAtdjHuFsBDw8R5uoYR86eWFQDA@mail.gmail.com