In ec34040af I added a mention that there was no point in setting
maintenance_work_mem to anything higher than 1GB for vacuum, but that
was incorrect as ginInsertCleanup() also looks at what
maintenance_work_mem is set to during VACUUM, and that's not limited to
1GB.
Here I attempt to make it clearer that the limitation applies only to
the number of dead tuple identifiers that we can collect during VACUUM.
I've also added a note to autovacuum_work_mem to mention this limitation.
I didn't do that in ec34040af as I'd had some wrong-headed ideas about
just limiting the maximum value for that GUC to 1GB.
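For context, a minimal sketch of the arithmetic behind the 1GB cap,
modeled loosely on the dead-tuple sizing in vacuumlazy.c (variable
names are illustrative):

    long    maxtuples;

    /* dead-TID storage is a single palloc'd array of ItemPointerData */
    maxtuples = (vac_work_mem * 1024L) / sizeof(ItemPointerData);
    /* ...so it can never usefully exceed palloc's MaxAllocSize (~1GB) */
    maxtuples = Min(maxtuples, MaxAllocSize / sizeof(ItemPointerData));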
Author: David Rowley
Discussion: https://postgr.es/m/CAApHDvpGwOAvunp-E-bN_rbAs3hmxMoasm5pzkYDbf36h73s7w@mail.gmail.com
Backpatch-through: 9.6, same as ec34040af
This saves a few lines of code. Also add a comment to mention why we use
ExplainPropertyInteger instead of ExplainPropertyUInteger given that
queryid is a uint64 type.
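For reference, a hedged sketch of the call in question; the signature
matches explain.c's ExplainPropertyInteger, and the comment paraphrases
the rationale:

    /*
     * queryId is uint64, but we print it with ExplainPropertyInteger
     * rather than ExplainPropertyUInteger so that the displayed value
     * matches the signed bigint queryid shown by pg_stat_statements.
     */
    ExplainPropertyInteger("Query Identifier", NULL, (int64) queryId,
                           es);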
Author: David Rowley
Reviewed-by: Julien Rouhaud
Discussion: https://postgr.es/m/CAApHDvqhSLYpSU_EqUdN39w9Uvb8ogmHV7_3YhJ0S3aScGBjsg@mail.gmail.com
Backpatch-through: 14, where this code was originally added
Since commit e462856a7a, pg_upgrade automatically creates a script to
update extensions, so mention that instead of ALTER EXTENSION.
Backpatch-through: 9.6
Copy-and-pasteo in 665c5855e, evidently. The 9.6 docs toolchain
whined about duplicate index entries, though our modern toolchain
doesn't. In any case, these GUCs surely are not about the
default settings of these values.
I had committer's remorse almost immediately after pushing cb76fbd7e,
upon finding that removing capturing subexpressions' subREs from the
data structure broke my proposed patch for REG_NOSUB optimization.
Revert that data structure change. Instead, address the concern
about not changing capturing subREs' endpoints by not changing the
endpoints. We don't need to, because the point of that bit was just
to ensure that the atom has endpoints distinct from the outer state
pair that we're stringing the branch between. We already made
suitable states in the parenthesized-subexpression case, so the
additional ones were just useless overhead. This seems more
understandable than Spencer's original coding, and it ought to be
a shade faster too by saving a few state creations and arc changes.
(I actually see a couple percent improvement on Jacobson's web
corpus, though that's barely above the noise floor so I wouldn't
put much stock in that result.)
Also, fix the logic added by ea1268f63 to ensure that the subRE
recorded in v->subs[subno] is exactly the one with capno == subno.
Spencer's original coding recorded the child subRE of the capture
node, which is okay so far as having the right endpoint states is
concerned, but as of cb76fbd7e the capturing subRE itself always
has those endpoints too. I think the inconsistency is confusing
for the REG_NOSUB optimization.
As before, backpatch to v14.
Discussion: https://postgr.es/m/0203588E-E609-43AF-9F4F-902854231EE7@enterprisedb.com
Up to now, we remembered the definition of a capturing parenthesis
subexpression by storing a pointer to the associated subRE node.
That was okay before, because that subRE didn't get modified anymore
while parsing the rest of the regexp. However, in the wake of
commit ea1268f63, that's no longer true: the outer invocation of
parseqatom() feels free to scribble on that subRE. This seems to
work anyway, because the states we jam into the child atom in the
"prepare a general-purpose state skeleton" stanza aren't really
semantically different from the original endpoints of the child atom.
But that would be mighty easy to break, and it's definitely not how
things worked before.
Between this and the issue fixed in the prior commit, it seems best
to get rid of this dependence on subRE nodes entirely. We don't need
the whole child subRE for future backrefs, only its starting and ending
NFA states; so let's just store pointers to those.
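A hedged sketch of the data-structure change (illustrative types; the
real code lives in regcomp.c):

    /* remember only what a future backref actually needs */
    struct subinfo
    {
        struct state *begin;    /* starting NFA state of the capture */
        struct state *end;      /* ending NFA state of the capture */
    };

    /* v->subs[subno] holds a struct subinfo instead of a subre * */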
Also, in the corner case where we make an extra subRE to handle
immediately-nested capturing parentheses, it seems like it'd be smart
to have the extra subRE have the same begin/end states as the original
child subRE does (s/s2 not lp/rp). I think that linking it from lp to
rp might actually be semantically wrong, though since Spencer's original
code did it that way, I'm not totally certain. Using s/s2 is certainly
not wrong, in any case.
Per report from Mark Dilger. Back-patch to v14 where the problematic
patches came in.
Discussion: https://postgr.es/m/0203588E-E609-43AF-9F4F-902854231EE7@enterprisedb.com
Commit cebc1d34e taught parseqatom() to optimize cases where a branch
contains only one, "messy", atom by getting rid of excess subRE nodes.
The way we really should do that is to keep the subRE built for the
"messy" child atom; but to avoid changing parseqatom's nominal API,
I made it delete that node after copying its fields to the outer subRE
made by parsebranch(). It seems that that actually worked at the time;
but it became dangerous after ea1268f63, because that later commit
allowed the lower invocation of parse() to return a subRE that was also
pointed to by some v->subs[] entry. This meant we could wind up with a
dangling pointer in v->subs[], allowing a later backref to misbehave,
but only if that subRE struct had been reused in between. So the damage
seems confined to cases like '((...))...(...\2'.
To fix, do what I should have done before and modify parseqatom's API
to make it possible for it to remove the caller's subRE instead of the
callee's. That's safer because we know that subRE isn't complete yet,
so noplace else will have a pointer to it.
Per report from Mark Dilger. Back-patch to v14 where the problematic
patches came in.
Discussion: https://postgr.es/m/0203588E-E609-43AF-9F4F-902854231EE7@enterprisedb.com
Rather than trying to pick table aliases that won't conflict with
any possible user-defined matview column name, adjust the queries'
syntax so that the aliases are only used in places where they can't be
mistaken for column names. Mostly this consists of writing "alias.*"
not just "alias", which adds clarity for humans as well as machines.
We do have the issue that "SELECT alias.*" acts differently from
"SELECT alias", but we can use the same hack ruleutils.c uses for
whole-row variables in SELECT lists: write "alias.*::compositetype".
We might as well revert to the original aliases after doing this;
they're a bit easier to read.
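A hedged sketch of the resulting query construction; matview.c builds
similar SQL with a StringInfo, and the names here are illustrative:

    /*
     * Write "alias.*::type", never a bare "alias", so the alias cannot
     * be mistaken for a user-defined column of the matview.
     */
    appendStringInfo(&querybuf,
                     "SELECT newdata.*::%s FROM %s newdata",
                     tgttype_name, matviewname);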
Like 75d66d10e, back-patch to all supported branches.
Discussion: https://postgr.es/m/2488325.1628261320@sss.pgh.pa.us
Casting a value that's already of a type with a specific typmod
to an unspecified typmod doesn't do anything so far as run-time
behavior is concerned. However, it really ought to change the
exposed type of the expression to match. Up to now,
coerce_type_typmod hasn't bothered with that, which creates gotchas
in contexts such as recursive unions. If for example one side of
the union is numeric(18,3), but it needs to be plain numeric to
match the other side, there's no direct way to express that.
This is easy enough to fix, by inserting a RelabelType to update the
exposed type of the expression. However, it's a bit nervous-making
to change this behavior, because it's stood for a really long time.
(I strongly suspect that it's like this in part because the logic
pre-dates the introduction of RelabelType in 7.0. The commit log
message for 57b30e8e2 is interesting reading here.) As a compromise,
we'll sneak the change into 14beta3, and consider back-patching to
stable branches if no complaints emerge in the next three months.
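A hedged sketch of the fix's shape in coerce_type_typmod; the variable
names are illustrative, though makeRelabelType is the real constructor:

    /*
     * Casting away a specific typmod changes no run-time behavior, but
     * insert a RelabelType so the exposed type reflects the cast.
     */
    if (exprTypmod(node) >= 0 && targetTypMod < 0)
        node = (Node *) makeRelabelType((Expr *) node,
                                        targetTypeId, -1,
                                        exprCollation(node),
                                        COERCE_IMPLICIT_CAST);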
Discussion: https://postgr.es/m/CABNQVagu3bZGqiTjb31a8D5Od3fUMs7Oh3gmZMQZVHZ=uWWWfQ@mail.gmail.com
Formerly, the numeric code tested whether an integer value of a larger
type would fit in a smaller type by casting it to the smaller type and
then testing if the reverse conversion produced the original value.
That's perfectly fine, except that it caused a test failure on
buildfarm animal castoroides, most likely due to a compiler bug.
Instead, do these tests by comparing against PG_INT16/32_MIN/MAX. That
matches existing code in other places, such as int84(), which is more
widely tested, and so is less likely to go wrong.
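A hedged sketch contrasting the two styles (the old test's behavior is
what castoroides apparently miscompiled):

    /* old: round-trip cast, relying on lossy narrowing
     *     if ((int64) (int32) v64 != v64)  ... out of range ...
     * new: explicit range comparison, as int84() does */
    if (v64 < PG_INT32_MIN || v64 > PG_INT32_MAX)
        ereport(ERROR,
                (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE),
                 errmsg("integer out of range")));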
While at it, add regression tests covering the numeric-to-int8/4/2
conversions, and adjust the recently added tests to the style of
434ddfb79a (on the v11 branch) to make failures easier to diagnose.
Per buildfarm via Tom Lane, reviewed by Tom Lane.
Discussion: https://postgr.es/m/2394813.1628179479%40sss.pgh.pa.us
postgres_fdw imported generated columns from the remote tables as plain
columns, and caused failures like "ERROR: cannot insert a non-DEFAULT
value into column "foo"" when inserting into the foreign tables, as it
tried to insert values into the generated columns. To fix, we do the
following under the assumption that generated columns in a postgres_fdw
foreign table are defined so that they represent generated columns in
the underlying remote table:
* Send DEFAULT for the generated columns to the foreign server on insert
or update, not generated column values computed on the local server (see
the sketch after this list).
* Add to postgresImportForeignSchema() an option "import_generated" to
include column generated expressions in the definitions of foreign
tables imported from a foreign server. The option is true by default.
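A hedged sketch of the first point, in the style of postgres_fdw's
deparse code (names are illustrative):

    /* attr is the target column's Form_pg_attribute */
    if (attr->attgenerated)
        appendStringInfoString(buf, "DEFAULT");
    else
        deparseExpr(expr, context);     /* ordinary column value */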
The assumption seems reasonable, because that would make a query of the
postgres_fdw foreign table return values for the generated columns that
are consistent with the generated expression.
While here, fix another issue in postgresImportForeignSchema(): it tried
to include column generated expressions as column default expressions in
the foreign table definitions when the import_default option was enabled.
Per bug #16631 from Daniel Cherniy. Back-patch to v12 where generated
columns were added.
Discussion: https://postgr.es/m/16631-e929fe9db0ffc7cf%40postgresql.org
This fixes a long-standing bug when using to_char() to format a
numeric value in scientific notation -- if the value's exponent is
less than -NUMERIC_MAX_DISPLAY_SCALE-1 (-1001), it produced a
division-by-zero error.
The reason for this error was that get_str_from_var_sci() divides its
input by 10^exp, which it produced using power_var_int(). However, the
underflow test in power_var_int() causes it to return zero if the
result scale is too small. That's not a problem for power_var_int()'s
only other caller, power_var(), since that limits the rscale to 1000,
but in get_str_from_var_sci() the exponent can be much smaller,
requiring a much larger rscale. Fix by introducing a new function to
compute 10^exp directly, with no rscale limit. This also allows 10^exp
to be computed more efficiently, without any numeric multiplication,
division or rounding.
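A hedged sketch of the idea, not the committed function: numeric stores
base-10000 digits, and 10^exp always has exactly one nonzero digit in
that base, so it can be built directly:

    static const NumericDigit tens[] = {1, 10, 100, 1000};
    int     w = (exp >= 0) ? exp / 4 : (exp - 3) / 4;  /* floor(exp/4) */

    alloc_var(result, 1);
    result->digits[0] = tens[exp - w * 4];
    result->weight = w;
    result->sign = NUMERIC_POS;
    result->dscale = Max(-exp, 0);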
Discussion: https://postgr.es/m/CAEZATCWhojfH4whaqgUKBe8D5jNHB8ytzemL-PnRx+KCTyMXmg@mail.gmail.com
Up to now we did a PQconsumeInput() for each pipelined query, asking the OS
for more input - which it often won't have, as all results might already have
been sent. That turns out to have a noticeable performance impact.
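A hedged sketch of the adjusted client-side pattern (real libpq calls;
error handling abbreviated):

    /*
     * Only ask the kernel for more data when libpq actually needs it
     * to complete the next result; often it's already buffered.
     */
    if (PQisBusy(conn))
    {
        if (!PQconsumeInput(conn))
            pg_log_error("%s", PQerrorMessage(conn));
    }
    while (!PQisBusy(conn) && (res = PQgetResult(conn)) != NULL)
        PQclear(res);           /* process, then free, each result */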
Alvaro Herrera reviewed the idea to add the PQisBusy() check, but not this
concrete patch.
Author: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/20210720180039.23rivhdft3l4mayn@alap3.anarazel.de
Backpatch: 14, where libpq/pgbench pipelining was introduced.
Oversight in commit 3499df0d, which generalized the reloption to give
users a way to consistently avoid VACUUM's index bypass optimization.
Per off-list report from Nikolay Shaplov.
Backpatch: 14-, where index cleanup reloption was extended.
I failed to account for the possibility that when
ExecAppendAsyncEventWait() notifies multiple async-capable nodes using
postgres_fdw, a preceding node might invoke process_pending_request() to
process a pending asynchronous request made by a succeeding node. In
that case the succeeding node should produce a tuple to return to the
parent Append node from tuples fetched by process_pending_request() when
notified. Repair.
Per buildfarm via Michael Paquier. Back-patch to v14, like the previous
commit.
Thanks to Tom Lane for testing.
Discussion: https://postgr.es/m/YQP0UPT8KmPiHTMs%40paquier.xyz
As of commit 84f5c2908, executing SQL commands (via SPI or otherwise)
requires having either an active Portal, or a caller-established
active snapshot. We were simply Assert'ing that that's the case.
But we've now had a couple different reports of people testing
extensions that didn't meet this requirement, and were confused by
the resulting crash. Let's convert the Assert to a test-and-elog,
in hopes of making the issue clearer for extension authors.
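A hedged sketch of the check's new shape; ActiveSnapshotSet() is the
real test, and the message text is paraphrased:

    /* fail cleanly in production builds instead of tripping an Assert */
    if (!ActiveSnapshotSet())
        elog(ERROR, "cannot execute SQL without an outer snapshot or portal");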
Per gripes from Liu Huailing and RekGRpth. Back-patch to v11,
like the prior commit.
Discussion: https://postgr.es/m/OSZPR01MB6215671E3C5956A034A080DFBEEC9@OSZPR01MB6215.jpnprd01.prod.outlook.com
Discussion: https://postgr.es/m/17035-14607d308ac8643c@postgresql.org
This fixes a couple of related problems that arise when raising
numbers to very large powers.
Firstly, when raising a negative number to a very large integer power,
the result should be well-defined, but the previous code would only
cope if the exponent was small enough to go through power_var_int().
Otherwise it would throw an internal error, attempting to take the
logarithm of a negative number. Fix this by adding suitable handling
to the general case in power_var() to cope with negative bases,
checking for integer powers there.
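A hedged sketch of the added handling (names are illustrative; the real
code works on NumericVars in power_var()):

    /* A negative base is OK only for integer exponents. */
    if (base_is_negative)
    {
        if (exp != floor(exp))
            ereport(ERROR,
                    (errcode(ERRCODE_INVALID_ARGUMENT_FOR_POWER_FUNCTION),
                     errmsg("a negative number raised to a non-integer power yields a complex result")));
        /* compute |base|^exp, negating the result for odd exponents */
        res_sign = (fmod(exp, 2.0) != 0.0) ? NUMERIC_NEG : NUMERIC_POS;
    }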
Next, when raising a (positive or negative) number whose absolute
value is slightly less than 1 to a very large power, the result should
approach zero as the power is increased. However, in some cases, for
sufficiently large powers, this would lose all precision and return 1
instead of 0. This was due to the way that the local_rscale was being
calculated for the final full-precision calculation:
    local_rscale = rscale + (int) val - ln_dweight + 8
The first two terms on the right hand side are meant to give the
number of significant digits required in the result ("val" being the
estimated result weight). However, this failed to account for the fact
that rscale is clipped to a maximum of NUMERIC_MAX_DISPLAY_SCALE
(1000), and the result weight might be less than -1000, causing their
sum to be negative, leading to a loss of precision. Fix this by
forcing the number of significant digits calculated to be nonnegative.
It's OK for it to be zero (when the result weight is less than -1000),
since the local_rscale value then includes a few extra digits to
ensure an accurate result.
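A hedged sketch of the adjusted calculation, using the names from the
description above:

    /*
     * Clamp the requested significant digits to be nonnegative; rscale
     * is clipped at 1000, but (int) val may be far below -1000.
     */
    sig_digits = Max(rscale + (int) val, 0);
    local_rscale = sig_digits - ln_dweight + 8;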
Finally, add additional underflow checks to exp_var() and power_var(),
so that they consistently return zero for cases like this where the
result is indistinguishable from zero. Some paths through this code
already returned zero in such cases, but others were throwing overflow
errors.
Dean Rasheed, reviewed by Yugo Nagata.
Discussion: http://postgr.es/m/CAEZATCW6Dvq7+3wN3tt5jLj-FyOcUgT5xNoOqce5=6Su0bCR0w@mail.gmail.com
CheckpointLock was removed in commit d18e75664a, and commit ce197e91d0
updated a leftover comment in CreateCheckPoint, but there was another
copy of it in CreateRestartPoint still.
A pending asynchronous request is handled by process_pending_request(),
which previously not only processed an in-progress remote query but
performed ExecForeignScan() to produce a tuple to return to the local
server asynchronously from the result of the remote query. But, as
reported by Andrey Lepikhov, that led to a server crash when executing
a query, or to an "InstrStartNode called twice in a row" or
"InstrEndLoop called on running node" failure under EXPLAIN ANALYZE, in
cases where the plan tree contained multiple async-capable nodes
accessing the same initplan/subplan, which in turn contained multiple
async-capable nodes scanning the same foreign tables as the parent
async-capable nodes. The reason is that the second step of
process_pending_request(), invoked when executing the initplan/subplan
for one of the parent async-capable nodes, caused recursive execution
of the initplan/subplan for another of the parent async-capable nodes.
To fix, split process_pending_request() into the two steps and postpone
the second step until ForeignAsyncConfigureWait() is called for each of
the pending asynchronous requests. Also, ExecAppendAsyncEventWait()
assumed that FDWs would register at least one wait event in the
WaitEventSet created there when called from ForeignAsyncConfigureWait();
instead, allow FDWs to register zero wait events, and modify
ExecAppendAsyncEventWait() to just return in that case.
Oversight in commit 27e1f1456. Back-patch to v14 where that commit went
in.
Andrey Lepikhov and Etsuro Fujita
Discussion: https://postgr.es/m/fe5eaa19-1704-e4a4-76ee-3b9d37ade399@postgrespro.ru
The buildfarm shows that this test has a further failure mode: when a
checkpoint starts earlier than expected, we detect a "checkpoint
completed" line that's not the one we want. Change the config to try to
prevent this.
Per buildfarm
While at it, update one comment that was forgotten in commit
d18e75664a2f.
Author: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Discussion: https://postgr.es/m/20210729.162038.534808353849568395.horikyota.ntt@gmail.com
Commit ffa2e4670 changed libpq so that multiple error reports
occurring during one operation (a connection attempt or query)
are accumulated in conn->errorMessage, where before new ones
usually replaced any prior error. At least in theory, that makes
us more vulnerable to running out of memory for the errorMessage
buffer. If it did happen, the user would be left with just an
empty-string error report, which is pretty unhelpful.
We can improve this by relying on pqexpbuffer.c's existing "broken
buffer" convention to track whether we've hit OOM for the current
operation's error string, and then substituting a constant "out of
memory" string in the small number of places where the errorMessage
is read out.
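A hedged sketch of the read-out convention; PQExpBufferBroken() is
pqexpbuffer.c's real test, while the helper name is illustrative:

    static const char oom_msg[] = "out of memory\n";

    static const char *
    errorMessageForUser(PGconn *conn)
    {
        /* a "broken" buffer means we hit OOM while accumulating */
        if (PQExpBufferBroken(&conn->errorMessage))
            return oom_msg;
        return conn->errorMessage.data;
    }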
While at it, apply the same method to similar OOM cases in
pqInternalNotice and pqGetErrorNotice3.
Back-patch to v14 where ffa2e4670 came in. In principle this could
go back further; but in view of the lack of field reports, the
hazard seems negligible in older branches.
Discussion: https://postgr.es/m/530153.1627425648@sss.pgh.pa.us
Certain versions of msys2/Windows have been observed to resolve symlinks
in perl2host rather than just follow them. This defeats using a
symlinked shorter path to a longer path, and makes certain tests fail.
We therefore call perl2host on the parent directory of the symlink and
thereafter just use that result.
Apply to release 14 where the problem has been observed.
Sometimes cygpath has been observed to return a path with a trailing
slash. That can cause problems. Also, make "cygpath" usage consistent
with "pwd -W" with respect to the use of forward slashes.
Backpatch to release 14 where the current code was introduced.
By default, pg_verifybackup needs pg_waldump to check the range of WAL
segments required for a backup, except if --no-parse-wal is specified.
The code checked for the presence of the pg_waldump binary in the
installation and reported an error if it was missing, but it forgot to
actually exit(). This could lead to confusing errors being reported.
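A hedged sketch of the missing step (the logging call and variable
names are illustrative):

    /* WAL parsing was requested, but pg_waldump can't be found */
    if (!no_parse_wal && pg_waldump_path == NULL)
    {
        pg_log_error("could not find pg_waldump");
        exit(1);                /* previously forgotten */
    }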
Reviewed-by: Robert Haas, Fabien Coelho
Discussion: https://postgr.es/m/YQDMdB+B68yePFeT@paquier.xyz
Backpatch-through: 13
If a file is truncated, we must update minRecoveryPoint. Once a file is
truncated, there's no going back; it would not be safe to stop recovery
at a point earlier than that anymore.
Commit 7bffc9b7bf changed xact_redo_commit() so that it updates
minRecoveryPoint on truncation, but forgot to change xact_redo_abort().
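A hedged sketch of the symmetry being restored, mirroring
xact_redo_commit (parsed and lsn are the redo routine's inputs):

    /*
     * If the abort record drops relation files, make sure
     * minRecoveryPoint is advanced past this record before we unlink
     * anything, just as commit redo already does.
     */
    if (parsed->nrels > 0)
    {
        XLogFlush(lsn);
        /* ... then drop the relation files ... */
    }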
Back-patch to all supported versions.
Reported-by: mengjuan.cmj@alibaba-inc.com
Author: Fujii Masao
Reviewed-by: Heikki Linnakangas
Discussion: https://postgr.es/m/b029fce3-4fac-4265-968e-16f36ff4d075.mengjuan.cmj@alibaba-inc.com
It wasn't all that clear which lock levels, if any, would be held on the
DEFAULT partition during an ATTACH PARTITION operation.
Also, clarify which locks will be taken if the DEFAULT partition or the
table being attached are themselves partitioned tables.
Here I'm only backpatching to v12, as before that version we obtained
an ACCESS EXCLUSIVE lock on the partitioned table. It seems much less
relevant to
mention which locks are taken on other tables when the partitioned table
itself is locked with an ACCESS EXCLUSIVE lock.
Author: Matthias van de Meent, David Rowley
Discussion: https://postgr.es/m/CAEze2WiTB6iwrV8W_J=fnrnZ7fowW3qu-8iQ8zCHP3FiQ6+o-A@mail.gmail.com
Backpatch-through: 12
This changes the behavior of examining the pg_file_settings view after
changing a config option that requires restart. The user needs to know
that any change of such options does not take effect until a restart,
and this worked correctly if the line was edited without being removed.
However, when the line was removed altogether, the flag didn't get set,
because the flag was only set in set_config_option, which isn't called
for removed lines. Repair.
(Ref.: commits 62d16c7fc561 and a486e35706ea)
Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/202107262302.xsfdfc5sb7sh@alvherre.pgsql
We failed to deal with an UNKNOWN-type input for
anycompatiblemultirange; that should throw an error indicating
that we don't know how to resolve the multirange type.
We also failed to infer the type of an anycompatiblerange output
from an anycompatiblemultirange input or vice versa.
Per bug #17066 from Alexander Lakhin. Back-patch to v14
where multiranges were added.
Discussion: https://postgr.es/m/17066-16a37f6223a8470b@postgresql.org
The error messages using the word "non-negative" are confusing because
it's ambiguous about whether zero is accepted. This commit improves
those error messages by replacing that word with less ambiguous phrases
like "greater than zero" or "greater than or equal to zero".
This commit also adds a note about the word "non-negative" to the error
message style guide, to help with writing new error messages.
When the postgres_fdw option fetch_size was set to zero, the error
message "fetch_size requires a non-negative integer value" was
previously reported. That error message was outright buggy, so
back-patch to all supported versions where it could be thrown.
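A hedged sketch of the preferred phrasing for such a check (message
text illustrative; def is the option's DefElem):

    ereport(ERROR,
            (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
             errmsg("%s requires an integer value greater than zero",
                    def->defname)));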
Reported-by: Hou Zhijie
Author: Bharath Rupireddy
Reviewed-by: Kyotaro Horiguchi, Fujii Masao
Discussion: https://postgr.es/m/OS0PR01MB5716415335A06B489F1B3A8194569@OS0PR01MB5716.jpnprd01.prod.outlook.com
Add pg_resetxlog -u option to set the oldest xid in pg_control.
Previously, -x set this value to be two billion less than the -x value.
However, that causes the server to immediately scan every relation's
relfrozenxid so it can advance pg_control's oldest xid to be inside the
autovacuum_freeze_max_age range, which is inefficient and might disrupt
diagnostic recovery. pg_upgrade will use this option to create the new
cluster so that it better matches the old cluster.
Reported-by: Jason Harvey, Floris Van Nee
Discussion: https://postgr.es/m/20190615183759.GB239428@rfd.leadboat.com, 87da83168c644fd9aae38f546cc70295@opammb0562.comp.optiver.com
Author: Bertrand Drouvot
Backpatch-through: 9.6
Commit ad600bba04 added the psql command \dX, listing extended
statistics objects, but it failed to consider search_path when
selecting the elements, so some of the returned elements might be
invisible.
The visibility was already considered for tab completion (added by
commit d99d58cdc8), so adding it to the query is fairly simple.
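A hedged sketch of the query tweak in psql's describe.c;
pg_statistics_obj_is_visible() is the real visibility function, while
the exact clause text is illustrative:

    /* filter \dX output the same way tab completion already does */
    appendPQExpBufferStr(&buf,
        "WHERE pg_catalog.pg_statistics_obj_is_visible(es.oid)\n");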
Reported and fixed by Justin Pryzby, regression tests by me. Backpatch
to PostgreSQL 14, where \dX was introduced.
Backpatch-through: 14
Author: Justin Pryzby
Reviewed-by: Tatsuro Yamada
Discussion: https://postgr.es/m/c027a541-5856-75a5-0868-341301e1624b%40nttcom.co.jp_1
The documentation mentioned the use of log_checkpoints, which cannot be
used in this context. This commit replaces log_checkpoints with
force_parallel_mode, a developer option useful to perform checks related
to parallelism.
Oversight in 854434c.
Author: Haiying Tang
Discussion: https://postgr.es/m/OS0PR01MB6113954B883ACEB2DDC973F2FBE59@OS0PR01MB6113.jpnprd01.prod.outlook.com
Backpatch-through: 14