Reindexing temp tables or indexes of other sessions is not allowed.
However, reindexdb in parallel mode previously listed them as
the objects to process, leading to failures.
This commit ensures reindexdb in parallel mode skips temporary tables
and indexes by adding a condition based on the relpersistence column
in pg_class to the object listing queries, preventing these issues.
Note that this commit does not affect reindexdb when temporary tables
or indexes are explicitly specified using the -t or -i options;
reindexdb in that case still does not skip them and can cause an error.
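As an illustration only (abbreviated, not the literal query from the
patch), the listing queries now carry a qual along these lines:
    SELECT c.oid
    FROM pg_catalog.pg_class c
    WHERE c.relkind IN ('r', 'm')
      AND c.relpersistence != 't'  -- RELPERSISTENCE_TEMP: skip temp relations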
Back-patch to v13 where parallel mode was introduced in reindexdb.
Author: Fujii Masao
Reviewed-by: Michael Paquier
Discussion: https://postgr.es/m/5f37ee56-14fb-44fe-9150-9eb97e10538b@oss.nttdata.com
Defect in last week's commit aac2c9b4fde889d13f859c233c2523345e72d32b,
per Coverity. Reaching this would need catalog corruption. Back-patch
to v12, like that commit.
An invalid toast index is skipped in reindex_relation(). These would be
remnants of a failed REINDEX CONCURRENTLY and they should never be
rebuilt as there can only be one valid toast index at a time.
REINDEX_REL_SUPPRESS_INDEX_USE, used by CLUSTER and VACUUM FULL, needs
to maintain a list of the indexes being processed. The list of indexes
is retrieved from the relation cache, and includes invalid indexes. The
code missed that invalid toast indexes are ignored in reindex_relation(),
as rebuilding them leads to a hard failure in reindex_index(), so they
were left in the reindex pending list, making the list inconsistent when
rechecked. The incorrect memory access happened when scanning pg_class
for the refresh of pg_database.datfrozenxid.
This issue has existed since REINDEX CONCURRENTLY was introduced, as it
can leave invalid toast indexes behind, so backpatch all the way down.
Reported-by: Alexander Lakhin
Author: Tender Wang
Discussion: https://postgr.es/m/18630-9aed99c38830657d@postgresql.org
Backpatch-through: 12
There's no need to add another level of indentation to this status
message. pg_log() will put it in the right place.
Oversight in commit 347758b120.
Reviewed-by: Daniel Gustafsson
Discussion: https://postgr.es/m/ZunW7XHLd2uTts4f%40nathan
Backpatch-through: 17
Running vacuumdb as a non-superuser while another user has created a
temporary table would lead to a mid-flight permission failure,
interrupting the operation. vacuum_rel() skips temporary relations of
other backends, and it makes no sense for vacuumdb to know about these
relations, so let's switch it to ignore temporary relations entirely.
Adding a qual based on relpersistence to the catalog query simplifies
the generation of its WHERE clause in vacuum_one_database(), allowing
the removal of "has_where".
Author: VaibhaveS, Michael Paquier
Reviewed-by: Fujii Masao
Discussion: https://postgr.es/m/CAM_eQjwfAR=y3G1fGyS1U9FTmc+FyJm9amNfY2QCZBnDDbNPZg@mail.gmail.com
Backpatch-through: 12
The previous commit fixed some ways of losing an inplace update. It
remained possible to lose one when a backend working toward a
heap_update() copied a tuple into memory just before inplace update of
that tuple. In catalogs eligible for inplace update, use LOCKTAG_TUPLE
to govern admission to the steps of copying an old tuple, modifying it,
and issuing heap_update(). This includes MERGE commands. To avoid
changing most of the pg_class DDL, don't require LOCKTAG_TUPLE when
holding a relation lock sufficient to exclude inplace updaters.
Back-patch to v12 (all supported versions). In v13 and v12, "UPDATE
pg_class" or "UPDATE pg_database" can still lose an inplace update. The
v14+ UPDATE fix needs commit 86dc90056dfdbd9d1b891718d2e5614e3e432f35,
and it wasn't worth reimplementing that fix without such infrastructure.
Reviewed by Nitin Motiani and (in earlier versions) Heikki Linnakangas.
Discussion: https://postgr.es/m/20231027214946.79.nmisch@google.com
As previously-added tests demonstrated, heap_inplace_update() could
instead update an unrelated tuple of the same catalog. It could lose
the update. Losing relhasindex=t was a source of index corruption.
Inplace-updating commands like VACUUM will now wait for heap_update()
commands like GRANT TABLE and GRANT DATABASE. That isn't ideal, but a
long-running GRANT already hurts VACUUM progress more just by keeping an
XID running. The VACUUM will behave like a DELETE or UPDATE waiting for
the uncommitted change.
For implementation details, start at the systable_inplace_update_begin()
header comment and README.tuplock. Back-patch to v12 (all supported
versions). In back branches, retain a deprecated heap_inplace_update(),
for extensions.
Reported by Smolkin Grigory. Reviewed by Nitin Motiani, (in earlier
versions) Heikki Linnakangas, and (in earlier versions) Alexander
Lakhin.
Discussion: https://postgr.es/m/CAMp+ueZQz3yDk7qg42hk6-9gxniYbp-=bG2mgqecErqR5gGGOA@mail.gmail.com
The current use always releases this locktag. A planned use will
continue that intent. It will involve more areas of code, making unlock
omissions easier. Warn under debug_assertions, like we do for various
resource leaks.  Back-patch to v12 (all supported versions), since that
is the plan for the commit of the new use.
Reviewed by Heikki Linnakangas.
Discussion: https://postgr.es/m/20240512232923.aa.nmisch@google.com
Commit d1379ebf4 carelessly broke printACLColumn for pre-9.4 servers,
by using the cardinality() function which we introduced in 9.4.
We expect psql's describe-related commands to work back to 9.2, so
this is bad. Use the longstanding array_length() function instead.
Per report from Christoph Berg. Back-patch to v17.
Discussion: https://postgr.es/m/ZvLXYglRS6hMMhtr@msg.df7cb.de
Commit 5bf748b8, which enhanced nbtree ScalarArrayOp execution, made
parallel index scans work with the new design for arrays via explicit
scheduling of primitive index scans. Under this scheme a parallel index
scan with array keys will perform the same number of index descents as
an equivalent serial index scan (barring corner cases where an
individual parallel worker discovers that it can advance the scan's
array keys without anybody needing to perform another descent of the
index to get to the relevant page on the leaf level).
Despite all this, the pgstats accounting wasn't updated; it continued to
increment the total number of index scans for the rel once per _bt_first
call, no matter the details. As a result, the number of (primitive)
index scans could be over-counted during parallel scans.
To fix, delay incrementing the count of index scans until after we've
established that another descent of the index (using either _bt_search
or _bt_endpoint) is required. That way pg_stat_user_tables.idx_scan
always advances in the same way, regardless of whether or not the scan
makes use of parallelism.
Oversight in commit 5bf748b8, which enhanced nbtree ScalarArrayOp
execution.
Author: Peter Geoghegan <pg@bowt.ie>
Reviewed-By: Tomas Vondra <tomas@vondra.me>
Discussion: https://postgr.es/m/CAH2-Wz=E7XrkvscBN0U6V81NK3Q-dQOmivvbEsjG-zwEfDdFpg@mail.gmail.com
Discussion: https://postgr.es/m/CAH2-WzkRqvaqR2CTNqTZP0z6FuL4-3ED6eQB0yx38XBNj1v-4Q@mail.gmail.com
Backpatch: 17-, where nbtree SAOP execution was enhanced.
Calling \bind repeatedly would leak the memory allocated for the list
of bind parameters after each call, as the list was simply reset at the
beginning of each new call without freeing it.
This issue is fixed by making the cleanup of the bind parameter list
more aggressive, refactoring it into a single routine called after
processing a query and before running an individual \bind.
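For instance (illustrative psql input, not the original report), each
\bind below allocates a new parameter list, and the list from the
previous call was formerly leaked:
    \bind 1 2
    SELECT $1::int + $2::int;
    \bind 3
    SELECT $1::int;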
HEAD required more surgery and has been fixed by 87eeadaea143. Issue
introduced by 5b66de3433e2.
Reported-by: Anthonin Bonnefoy
Discussion: https://postgr.es/m/2e5b89af-a351-ff0a-000c-037ac28314ab@gmail.com
Backpatch-through: 16
This commit adds query ID reports for two code paths when processing
extended query protocol messages:
- When receiving a bind message, setting it to the first Query retrieved
from a cached plan.
- When receiving an execute message, setting it to the first PlannedStmt
stored in a portal.
An advantage of this method is that it is able to cover all the types
of portals handled in the extended query protocol, particularly these
two cases where the report done in ExecutorStart() is not enough (nor
would an addition in ExecutorRun() be, for the second point):
- Multiple execute messages, with multiple ExecutorRun().
- Portal with execute/fetch messages, like a query with a RETURNING
clause and a fetch size that stores the tuples in a first execute
message going through ExecutorStart() and ExecutorRun(), followed by one
or more execute messages doing only fetches from the tuplestore created
in the first message. This corresponds to the case where
execute_is_fetch is set, for example.
Note that the query ID reporting done in ExecutorStart() is still
necessary, as an EXECUTE requires it.  Query ID reporting is optimistic,
and additional calls to pgstat_report_query_id() do no harm, as the
first report takes priority unless a report is forced.  The comment in
ExecutorStart() is adjusted to reflect better the reality with the
extended query protocol.
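As a rough illustration (assuming compute_query_id is enabled; this is
not the test added here), the effect can be observed like this:
    -- session 1, driving the extended query protocol through psql:
    \bind 10
    SELECT pg_sleep($1);
    -- session 2, while the execute message is still running:
    SELECT query_id, query FROM pg_stat_activity WHERE state = 'active';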
The test added in pg_stat_statements is a courtesy of Robert Haas. This
uses psql's \bind metacommand, hence this part is backpatched down to
v16.
Reported-by: Kaido Vaikla, Erik Wienhold
Author: Sami Imseih
Reviewed-by: Jian He, Andrei Lepikhov, Michael Paquier
Discussion: https://postgr.es/m/CA+427g8DiW3aZ6pOpVgkPbqK97ouBdf18VLiHFesea2jUk3XoQ@mail.gmail.com
Discussion: https://postgr.es/m/CA+TgmoZxtnf_jZ=VqBSyaU8hfUkkwoJCJ6ufy4LGpXaunKrjrg@mail.gmail.com
Discussion: https://postgr.es/m/1391613709.939460.1684777418070@office.mailbox.org
Backpatch-through: 14
Commits 041b9680 and 6377e12a changed the interface of
scan_analyze_next_block() to take a ReadStream instead of a BlockNumber
and a BufferAccessStrategy, and to return a value to indicate when the
stream has run out of blocks.
This caused integration problems for at least one known extension that
uses specially encoded BlockNumber values that map to different
underlying storage, because acquire_sample_rows() sets up the stream so
that read_stream_next_buffer() reads blocks from the main fork of the
relation's SMgrRelation.
Provide read_stream_next_block(), as a way for such an extension to
access the stream of raw BlockNumbers directly and forward them to its
own ReadBuffer() calls after decoding, as it could in earlier releases.
The new function returns the BlockNumber and BufferAccessStrategy that
were previously passed directly to scan_analyze_next_block().
Alternatively, an extension could wrap the stream of BlockNumbers in
another ReadStream with a callback that performs any decoding required
to arrive at real storage manager BlockNumber values, so that it could
benefit from the I/O combining and concurrency provided by
read_stream.c.
Another class of table access method that does nothing in
scan_analyze_next_block() because it is not block-oriented could use
this function to control the number of block sampling loops. It could
match the previous behavior with "return read_stream_next_block(stream,
&bas) != InvalidBlockNumber".
Ongoing work is expected to provide better ANALYZE support for table
access methods that don't behave like heapam with respect to storage
blocks, but that will be for future releases.
Back-patch to 17.
Reported-by: Mats Kindahl <mats@timescale.com>
Reviewed-by: Mats Kindahl <mats@timescale.com>
Discussion: https://postgr.es/m/CA%2B14425%2BCcm07ocG97Fp%2BFrD9xUXqmBKFvecp0p%2BgV2YYR258Q%40mail.gmail.com
Since we introduced unlogged sequences in v15, identity sequences
have defaulted to having the same persistence as their owning table.
However, it is possible to change that with ALTER SEQUENCE, and
pg_dump tries to preserve the logged-ness of sequences when it doesn't
match (as indeed it wouldn't for an unlogged table from before v15).
The fly in the ointment is that ALTER SEQUENCE SET [UN]LOGGED fails
in binary-upgrade mode, because it needs to assign a new relfilenode
which we cannot permit in that mode. Thus, trying to pg_upgrade a
database containing a mismatching identity sequence failed.
To fix, add syntax to ADD/ALTER COLUMN GENERATED AS IDENTITY to allow
the sequence's persistence to be set correctly at creation, and use
that instead of ALTER SEQUENCE SET [UN]LOGGED in pg_dump. (I tried to
make SET [UN]LOGGED work without any pg_dump modifications, but that
seems too fragile to be a desirable answer. This way should be
markedly faster anyhow.)
In passing, document the previously-undocumented SEQUENCE NAME option
that pg_dump also relies on for identity sequences; I see no value
in trying to pretend it doesn't exist.
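As a sketch only (the exact placement of the persistence keyword among
the sequence options is an assumption here), pg_dump can now emit
something along these lines instead of a separate ALTER SEQUENCE SET
UNLOGGED:
    ALTER TABLE ut ALTER COLUMN id
      ADD GENERATED ALWAYS AS IDENTITY
      (SEQUENCE NAME public.ut_id_seq UNLOGGED);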
Per bug #18618 from Anthony Hsu.
Back-patch to v15 where we invented this stuff.
Discussion: https://postgr.es/m/18618-d4eb26d669ed110a@postgresql.org
Commit 5bf748b8, which enhanced nbtree ScalarArrayOp execution, made
parallel index scans work with the new design for arrays via explicit
scheduling of primitive index scans. A backend that successfully
scheduled the scan's next primitive index scan saved its backend local
array keys in shared memory. Any backend could pick up the scheduled
primitive scan within _bt_first. This scheme decouples scheduling a
primitive scan from starting the scan (by performing another descent of
the index via a _bt_search call from _bt_first) to make things robust.
The scheme had a deadlock hazard, at least when the leader process
participated in the scan. _bt_parallel_seize had a code path that made
backends that were not in an immediate position to start a scheduled
primitive index scan wait for some other backend to do so instead.
Under the right circumstances, the leader process could wait here
forever: the leader would wait for any other backend to start the
primitive scan, while every worker was busy waiting on the leader to
consume tuples from the scan's tuple queue.
To fix, don't wait for a scheduled primitive index scan to be started by
some other eligible backend from within _bt_parallel_seize (when the
calling backend isn't in a position to do so itself). Return false
instead, while recording that the scan has a scheduled primitive index
scan in backend local state. This leaves the backend in the same state
as the existing case where a backend schedules (or tries to schedule)
another primitive index scan from within _bt_advance_array_keys, before
calling _bt_parallel_seize. _bt_parallel_seize already handles that
case by returning false without waiting, and without unsetting the
backend local state. Leaving the backend in this state enables it to
start a previously scheduled primitive index scan once it gets back to
_bt_first.
Oversight in commit 5bf748b8, which enhanced nbtree ScalarArrayOp
execution.
Matthias van de Meent, with tweaks by me.
Author: Matthias van de Meent <boekewurm+postgres@gmail.com>
Reported-By: Tomas Vondra <tomas@vondra.me>
Reviewed-By: Peter Geoghegan <pg@bowt.ie>
Discussion: https://postgr.es/m/CAH2-WzmMGaPa32u9x_FvEbPTUkP5e95i=QxR8054nvCRydP-sw@mail.gmail.com
Backpatch: 17-, where nbtree SAOP execution was enhanced.
In existing releases of libxml2, xmlXPathCompile can be driven
to stack overflow because it fails to protect itself against
too-deeply-nested input. While there is an upstream fix as of
yesterday, it will take years for that to propagate into all
shipping versions. In the meantime, we can protect our own
usages basically for free by calling xmlXPathCtxtCompile instead.
(The actual bug is that libxml2 keeps its nesting counter in the
xmlXPathContext, and its parsing code was willing to just skip
counting nesting levels if it didn't have a context. So if we supply
a context, all is well. It seems odd actually that it works at all
to not supply a context, because this means that XPath parsing does
not have access to XML namespace info. Apparently libxml2 never
checks namespaces until runtime? Anyway, this seems like good
future-proofing even if its only immediate effect is to dodge a bug.)
Sadly, this hack only offers protection with libxml2 2.9.11 and newer.
Before that there are multiple similar problems, so if you are
processing untrusted XML it behooves you to get a newer version.
But we have some pretty old libxml2 in the buildfarm, so it seems
impractical to add a regression test to verify this fix.
Per bug #18617 from Jingzhou Fu. Back-patch to all supported
versions.
Discussion: https://postgr.es/m/18617-1cee4d2ed1f4e7ae@postgresql.org
Discussion: https://gitlab.gnome.org/GNOME/libxml2/-/issues/799
Historically we've used timezone "PST8PDT", but the recent release
2024b of tzdb changes the definition of that zone in a way that
breaks many test cases concerned with dates before 1970. Although
we've not yet adopted 2024b into our own tree, this is already
problematic for people using --with-system-tzdata if their platform
has already adopted 2024b. To work with both older and newer
versions of tzdb, switch to using "America/Los_Angeles", accepting
the ensuing changes in regression test results.
Back-patch to all supported branches.
Per report and patch from Wolfgang Walther.
Discussion: https://postgr.es/m/0a997455-5aba-4cf2-a354-d26d8bcbfae6@technowledgy.de
When we are building a hash index that is large enough to need
pre-sorting (larger than either maintenance_work_mem or NBuffers),
the initial sorting phase is interruptible, but the insertion
phase wasn't. Add the missing CHECK_FOR_INTERRUPTS().
Per bug #18616 from Alexander Lakhin. Back-patch to all
supported branches.
Pavel Borisov
Discussion: https://postgr.es/m/18616-acbb9e5caf41e964@postgresql.org
Discussion of commit ed055d249 revealed that we don't actually
want jsonpath's .string() method to depend on DateStyle, nor
TimeZone either, because the non-"_tz" jsonpath functions are
supposed to be immutable. Potentially we could allow a TimeZone
dependency in the "_tz" variants, but it seems better to just
uniformly define this method as returning the same string that
jsonb text output would do. That's easier to implement too,
saving a couple dozen lines.
Patch by me, per complaint from Peter Eisentraut. Back-patch
to v17 where this feature came in (in 66ea94e8e). Also
back-patch ed055d249 to provide test cases.
Discussion: https://postgr.es/m/5e8879d0-a3c8-4be2-950f-d83aa2af953a@eisentraut.org
Currently, when WITH CONDITIONAL WRAPPER is specified, array wrappers
are applied even to a single SQL/JSON item if it is a scalar JSON
value, but this behavior does not comply with the standard.
To fix, apply wrappers only when there are multiple SQL/JSON items
in the result.
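For example (illustrative queries, not the regression tests):
    -- a single scalar item is no longer wrapped
    SELECT JSON_QUERY('{"a": 1}', '$.a' WITH CONDITIONAL WRAPPER);  -- 1, was [1]
    -- multiple items still get the array wrapper
    SELECT JSON_QUERY('[1, 2]', '$[*]' WITH CONDITIONAL WRAPPER);   -- [1, 2]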
Reported-by: Peter Eisentraut <peter@eisentraut.org>
Author: Peter Eisentraut <peter@eisentraut.org>
Author: Amit Langote <amitlangote09@gmail.com>
Reviewed-by: Andrew Dunstan <andrew@dunslane.net>
Discussion: https://postgr.es/m/8022e067-818b-45d3-8fab-6e0d94d03626%40eisentraut.org
Backpatch-through: 17
check_agglevels_and_constraints() asserted that if we find an
aggregate function in an EXPR_KIND_FROM_SUBSELECT expression, the
expression must be in a LATERAL subquery. Alexander Lakhin found a
case where that's not so: because of the odd scoping rules for NEW/OLD
within a rule, a reference to NEW/OLD could cause an aggregate to be
considered top-level even though it's in an unmarked sub-select.
The error message that would be thrown seems sufficiently on-point,
so just remove the Assert. (Hence, this is not a bug for production
builds.)
This Assert was added by me in commit eaccfded9 (9.3 era). It looks
like I put it in to cross-check that the new logic for detecting
misplaced aggregates (using agglevelsup) caught the same cases that a
previous check on p_lateral_active did. So there might have been some
related misbehavior before eaccfded9 ... but that's very ancient
history by now, so I didn't dig any deeper.
Per bug #18608 from Alexander Lakhin. Back-patch to all supported
branches.
Discussion: https://postgr.es/m/18608-48de0717508ee429@postgresql.org
When building a JSON object, the code builds a hash table of keys, to
allow checking if the keys are unique. The uniqueness check and adding
the new key happens in json_unique_check_key(), but this assumes the
pointer to the key remains valid.
Unfortunately, two places passed pointers to keys in a buffer, while
also appending more data (additional key/value pairs) to the buffer.
With enough data the buffer is resized by enlargeStringInfo(), which
calls repalloc(), invalidating the earlier key pointers.
Due to this the uniqueness check may fail with both false negatives and
false positives, producing JSON objects with duplicate keys or failing
to produce a perfectly valid JSON object.
This affects multiple functions that enforce uniqueness of keys, all
introduced in PG16 with the new SQL/JSON support:
- json_object_agg_unique / jsonb_object_agg_unique
- json_object / jsonb_objectagg
Existing regression tests did not detect the issue, simply because the
initial buffer size is 1024 and the objects were small enough not to
require the repalloc.
With a sufficiently large object, AddressSanitizer reported the access
to invalid memory immediately. So would valgrind, of course.
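For instance, a reproducer only needs values long enough to force
enlargeStringInfo() to repalloc the buffer while keys are still being
checked (illustrative):
    SELECT json_object_agg_unique(g::text, repeat('x', 1000))
    FROM generate_series(1, 100) AS g;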
Fixed by copying the key into the hash table memory context, and adding
regression tests with enough data to repalloc the buffer. Backpatch to
16, where the functions were introduced.
Reported by Alexander Lakhin. Investigation and initial fix by Junwang
Zhao, with various improvements and tests by me.
Reported-by: Alexander Lakhin
Author: Junwang Zhao, Tomas Vondra
Backpatch-through: 16
Discussion: https://postgr.es/m/18598-3279ed972a2347c7@postgresql.org
Discussion: https://postgr.es/m/CAEG8a3JjH0ReJF2_O7-8LuEbO69BxPhYeXs95_x7+H9AMWF1gw@mail.gmail.com
We must drop whitespace while parsing the input, else libxml2
will include "blank" nodes that interfere with the desired
indentation behavior. The end result is that we didn't indent
nodes separated by whitespace.
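For example (illustrative):
    -- sibling nodes separated by whitespace are now indented as expected
    SELECT xmlserialize(DOCUMENT '<a> <b>1</b> <c>2</c> </a>'::xml
                        AS text INDENT);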
Also, it seems that libxml2 may add a trailing newline when working
in DOCUMENT mode. This is semantically insignificant, so strip it.
This is in the gray area between being a bug fix and a definition
change. However, the INDENT option is still pretty new (since v16),
so I think we can get away with changing this in stable branches.
Hence, back-patch to v16.
Jim Jones
Discussion: https://postgr.es/m/872865a8-548b-48e1-bfcd-4e38e672c1e4@uni-muenster.de
When the ON ERROR / ON EMPTY behavior is to return NULL, returning
NULL directly from ExecEvalJsonExprPath() suffices. Therefore, there's
no need to create separate steps to check the error/empty flag or
those to evaluate the constant NULL expression.  This speeds up
common cases because the default ON ERROR / ON EMPTY behavior for
JSON_QUERY() and JSON_VALUE() is to return NULL. However, these steps
are necessary if the RETURNING type is a domain, as constraints on the
domain may need to be checked.
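The common cases in question are the implicit defaults, for example
(illustrative):
    SELECT JSON_VALUE('{"a": 1}', '$.b');         -- NULL ON EMPTY by default
    SELECT JSON_QUERY('{"a": 1}', 'strict $.b');  -- NULL ON ERROR by default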
Reported-by: Jian He <jian.universality@gmail.com>
Author: Jian He <jian.universality@gmail.com>
Author: Amit Langote <amitlangote09@gmail.com>
Discussion: https://postgr.es/m/CACJufxEo4sUjKCYtda0_qt9tazqqKPmF1cqhW9KBOUeJFqQd2g@mail.gmail.com
Backpatch-through: 17
As introduced by f9900df5f94, a REINDEX CONCURRENTLY job done for an
index with predicates or expressions would set PROC_IN_SAFE_IC in its
MyProc->statusFlags, causing it to be ignored by other concurrent
operations.
Such concurrent index rebuilds should never be ignored, as a predicate
or an expression could call a user-defined function that accesses a
different table than the table where the index is rebuilt.
A test that uses injection points is added, backpatched down to 17.
Michail has proposed a different test, but I have added something
simpler with more coverage.
Oversight in f9900df5f949.
Author: Michail Nikolaev
Discussion: https://postgr.es/m/CANtu0oj9A3kZVduFTG0vrmGnKB+DCHgEpzOp0qAyOgmks84j0w@mail.gmail.com
Backpatch-through: 14
pg_stat_get_io() applied TimestampTzGetDatum twice to the
stat_reset_timestamp value. On 64-bit builds that's harmless because
TimestampTzGetDatum is a no-op, but on 32-bit builds it results in
displaying garbage in the stats_reset column of the pg_stat_io view.
Bug dates to commit a9c70b46d which introduced pg_stat_io, so
back-patch to v16 where that came in.
Bertrand Drouvot
Discussion: https://postgr.es/m/Ztrd+XcPTz1zorkg@ip-10-97-1-34.eu-west-3.compute.internal
Use EMPTY ARRAY instead of EMPTY.
This change does not affect the runtime behavior of JSON_TABLE(),
which continues to return an empty relation ON ERROR. It only alters
whether the default ON ERROR behavior is shown in the deparsed output.
Reported-by: Jian He <jian.universality@gmail.com>
Discussion: https://postgr.es/m/CACJufxEo4sUjKCYtda0_qt9tazqqKPmF1cqhW9KBOUeJFqQd2g@mail.gmail.com
Backpatch-through: 17
The deparsing code in get_json_expr_options() unnecessarily emitted
the default column-specific ON ERROR / EMPTY behavior when the
top-level ON ERROR behavior in JSON_TABLE was set to ERROR. Fix that
by not overriding the column-specific default, determined based on
the column's JsonExprOp in get_json_table_columns(), with
JSON_BEHAVIOR_ERROR when that is the top-level ON ERROR behavior.
Note that this only removes redundancy; the current deparsing output
is not incorrect, just redundant.
Reviewed-by: Jian He <jian.universality@gmail.com>
Discussion: https://postgr.es/m/CACJufxEo4sUjKCYtda0_qt9tazqqKPmF1cqhW9KBOUeJFqQd2g@mail.gmail.com
Backpatch-through: 17
When the ON ERROR / ON EMPTY behavior is to return NULL, returning
NULL directly from ExecEvalJsonExprPath() suffices. Therefore, there's
no need to create separate steps to check the error/empty flag or
those to evaluate the constant NULL expression.  This speeds up
common cases because the default ON ERROR / ON EMPTY behavior for
JSON_QUERY() and JSON_VALUE() is to return NULL. However, these steps
are necessary if the RETURNING type is a domain, as constraints on the
domain may need to be checked.
Reported-by: Jian He <jian.universality@gmail.com>
Author: Jian He <jian.universality@gmail.com>
Author: Amit Langote <amitlangote09@gmail.com>
Discussion: https://postgr.es/m/CACJufxEo4sUjKCYtda0_qt9tazqqKPmF1cqhW9KBOUeJFqQd2g@mail.gmail.com
Backpatch-through: 17
Use EMPTY ARRAY instead of EMPTY.
This change does not affect the runtime behavior of JSON_TABLE(),
which continues to return an empty relation ON ERROR. It only alters
whether the default ON ERROR behavior is shown in the deparsed output.
Reported-by: Jian He <jian.universality@gmail.com>
Discussion: https://postgr.es/m/CACJufxEo4sUjKCYtda0_qt9tazqqKPmF1cqhW9KBOUeJFqQd2g@mail.gmail.com
Backpatch-through: 17
The deparsing code in get_json_expr_options() unnecessarily emitted
the default column-specific ON ERROR / EMPTY behavior when the
top-level ON ERROR behavior in JSON_TABLE was set to ERROR. Fix that
by not overriding the column-specific default, determined based on
the column's JsonExprOp in get_json_table_columns(), with
JSON_BEHAVIOR_ERROR when that is the top-level ON ERROR behavior.
Note that this only removes redundancy; the current deparsing output
is not incorrect, just redundant.
Reviewed-by: Jian He <jian.universality@gmail.com>
Discussion: https://postgr.es/m/CACJufxEo4sUjKCYtda0_qt9tazqqKPmF1cqhW9KBOUeJFqQd2g@mail.gmail.com
Backpatch-through: 17
Since commit 2549f0661, we reject an identifier immediately following
a numeric literal (without separating whitespace), because that risks
ambiguity with hex/octal/binary integers. However, that patch used
token patterns like "{integer}{ident_start}", which is problematic
because {ident_start} matches only a single byte. If the first
character after the integer is a multibyte character, this ends up
with flex reporting an error message that includes a partial multibyte
character. That can cause assorted bad-encoding problems downstream,
both in the report to the client and in the postmaster log file.
To fix, use {identifier} not {ident_start} in the "junk" token
patterns, so that they will match complete multibyte characters.
This seems generally better user experience quite aside from the
encoding problem: for "123abc" the error message will now say that
the error appeared at or near "123abc" instead of "123a".
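For instance (illustrative):
    SELECT 123abc;   -- now reported at or near "123abc", not "123a"
    SELECT 123абв;   -- multibyte junk no longer truncated mid-character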
While at it, add some commentary about why these patterns exist
and how they work.
Report and patch by Karina Litskevich; review by Pavel Borisov.
Back-patch to v15 where the problem came in.
Discussion: https://postgr.es/m/CACiT8iZ_diop=0zJ7zuY3BXegJpkKK1Av-PU7xh0EDYHsa5+=g@mail.gmail.com
This term was using an inconsistent casing between the code and the
documentation, using "CommitTsSLRU" in wait_event_names.txt and
"CommitTSSLRU" in the code.
Let's update the term in the code to match what's in the documentation,
"CommitTs" being the more commonly used casing, so that
pg_stat_activity shows the same term as the documentation.
Oversight in 53c2a97a9266.
Author: Alexander Lakhin
Discussion: https://postgr.es/m/f7e514cf-2446-21f1-a5d2-8c089a6e2168@gmail.com
Backpatch-through: 17
These tests depend on the test module injection_points to be installed,
but it may not be available as the contents of src/test/modules/ are not
installed by default.
This commit adds a workaround based on a scan of pg_available_extensions
to check if the extension is available, skipping the test if it is not.
This allows installcheck to work transparently.
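The check is essentially of this shape (illustrative, not the literal
test code):
    SELECT count(*) > 0 AS has_injection_points
      FROM pg_available_extensions
     WHERE name = 'injection_points';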
There are more tests impacted by this problem on HEAD, but for now this
addresses only the tests that exist on HEAD and v17 as the release is
close by.
Reported-by: Maxim Orlov
Discussion: https://postgr.es/m/CACG=ezZkoT-pFz6a9XnyToiuR-Wg8fGELqHLoyBodr+2h-77qA@mail.gmail.com
Backpatch-through: 17
This is confusing, as it exports the same variable twice.  Oversight in
6782709df81f that has spread to more places afterwards.
Reported-by: Alvaro Herrera, Tom Lane
Discussion: https://postgr.es/m/202408201630.mn6vbohjh7hh@alvherre.pgsql
Backpatch-through: 17
/usr/bin/msgfmt: po/fr.po: warning: PO file header fuzzy
warning: older versions of msgfmt will give an error on this
Apparently, not all versions of msgfmt produce this. Quick fix for
now, more to be researched later.
Commit b5a9b18c introduced block streaming infrastructure with a special
fast path for all-cached scans, and commit b7b0f3f2 connected the
infrastructure up to sequential scans. One of the fast path
micro-optimizations had an unintended consequence: it interfered with
parallel sequential scan's block range allocator (from commit 56788d21),
which has its own ramp-up and ramp-down algorithm when handing out
groups of pages to workers. A scan of an all-cached table could give
extra blocks to one worker, when others had finished. In some plans
(probably already very bad plans, such as the one reported by
Alexander), the unfairness could be magnified.
To fix, an internal buffer of 16 block numbers is removed, keeping just
a single block buffer for technical reasons.
Back-patch to 17.
Reported-by: Alexander Lakhin <exclusion@gmail.com>
Discussion: https://postgr.es/m/63a63690-dd92-c809-0b47-af05459e95d1%40gmail.com
The first test was sensitive to the insert LSN after setting up the
catalogs, which depended on environmental things like the locales on the
OS and usernames. Switch to a new WAL file before the first test, as a
simple way to put every computer into the same state.
Back-patch to all supported releases.
Reported-by: Anton Voloshin <a.voloshin@postgrespro.ru>
Reported-by: Nathan Bossart <nathandbossart@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Nathan Bossart <nathandbossart@gmail.com>
Discussion: https://postgr.es/m/b26aeac2-cb6d-4633-a7ea-945baae83dcf%40postgrespro.ru
This change improves the description of the
restrict_nonsystem_relation_kind parameter in guc_tables.c and in the
documentation, for better clarity.
Backpatch to 12, where this GUC parameter was introduced.
Reviewed-by: Peter Eisentraut
Discussion: https://postgr.es/m/6a96f1af-22b4-4a80-8161-1f26606b9ee2%40eisentraut.org
Backpatch-through: 12
Commit 2489d76c4 removed some logic from pullup_replace_vars()
that avoided wrapping a PlaceHolderVar around a pulled-up
subquery output expression if the expression could be proven
to go to NULL anyway (because it contained Vars or PHVs of the
pulled-up relation and did not contain non-strict constructs).
But removing that logic turns out to cause performance regressions
in some cases, because the extra PHV blocks subexpression folding,
and will do so even if outer-join reduction later turns it into a
no-op with no phnullingrels bits. This can for example prevent
an expression from being matched to an index.
The reason for always adding a PHV was to ensure we had someplace
to put the varnullingrels marker bits of the Var being replaced.
However, it turns out we can optimize in exactly the same cases that
the previous code did, because we can instead attach the needed
varnullingrels bits to the contained Var(s)/PHV(s).
This is not a complete solution --- it would be even better if we
could remove PHVs after reducing them to no-ops. It doesn't look
practical to back-patch such an improvement, but this change seems
safe and at least gets rid of the performance-regression cases.
Per complaint from Nikhil Raj. Back-patch to v16 where the
problem appeared.
Discussion: https://postgr.es/m/CAG1ps1xvnTZceKK24OUfMKLPvDP2vjT-d+F2AOCWbw_v3KeEgg@mail.gmail.com
If an ORDER BY item in SELECT is a bare identifier, the parser
first seeks it as an output column name of the SELECT (for SQL92
compatibility). However, ruleutils.c is expecting the SQL99
interpretation where such a name is an input column name. So it's
possible to produce an incorrect display of a view in the (admittedly
pretty ill-advised) case where some other column is renamed in the
SELECT output list to match an ORDER BY column.
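For illustration (an assumed, ill-advised schema):
    CREATE TABLE t (a int, b int);
    CREATE VIEW v AS SELECT a AS b, b AS a FROM t ORDER BY b;
    -- ORDER BY b resolves to the output column "b", i.e. t.a; deparsing
    -- it as a bare "a" would be read back as the output column "a" (t.b)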
This can be fixed by table-qualifying such names in the dumped
view text. To avoid cluttering less-ill-advised queries, we'd
like to do so only when there's an actual name conflict.
That requires passing the current get_query_def call's resultDesc
parameter down to get_variable, so that it can determine what
the output column names are. In hopes of reducing rather than
increasing notational clutter in ruleutils.c, I moved that value
into the deparse_context struct and removed it from the parameter
lists of get_query_def's other subroutines.
I made a few other cosmetic changes while at it:
* Likewise move the colNamesVisible parameter into deparse_context.
* Rename deparse_context's windowTList field to targetList,
since it's no longer used only in connection with WINDOW clauses.
* Replace the special_exprkind field with a bool inGroupBy,
since that was all it was being used for, and the apparent
flexibility of storing a ParseExprKind proved to be illusory.
(We need a separate varInOrderBy field to make this patch work.)
* Remove useless save/restore logic in get_select_query_def.
In principle, this bug is quite old. However, it seems unreachable
before 1b4d280ea, because before that the presence of "new" and "old"
entries in a view's rangetable caused us to always table-qualify every
Var reference in dumped views. Hence, back-patch to v16 where that
came in.
Per bug #18589 from Quynh Tran.
Discussion: https://postgr.es/m/18589-70091cb81db1a3f1@postgresql.org