Presently, when this function is called for an unlogged sequence on
a standby server, it will error out with a message like
ERROR: could not open file "base/5/16388": No such file or directory
Since the pg_sequences system view uses pg_sequence_last_value(),
it can error similarly. To fix, modify the function to return NULL
for unlogged sequences on standby servers. Since this bug is
present on all versions since v15, this approach is preferable to
making the ERROR nicer because we need to repair the pg_sequences
view without modifying its definition on released versions. For
consistency, this commit also modifies the function to return NULL
for other sessions' temporary sequences. The pg_sequences view
already appropriately filters out such sequences, so there's no bug
there, but we might as well offer some defense in case someone
invokes this function directly.
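As a rough illustration (not the literal patch; variable names and the
surrounding code are elided), the new check has approximately this shape
inside pg_sequence_last_value(), once the sequence relation has been
opened as "seqrel":

    /* other sessions' temp sequences, or unlogged on a standby: no data */
    if (RELATION_IS_OTHER_TEMP(seqrel) ||
        (RecoveryInProgress() &&
         seqrel->rd_rel->relpersistence == RELPERSISTENCE_UNLOGGED))
    {
        relation_close(seqrel, NoLock);
        PG_RETURN_NULL();
    }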
Unlogged sequences were first introduced in v15, but temporary
sequences are much older, so while the fix for unlogged sequences
is only back-patched to v15, the temporary sequence portion is
back-patched to all supported versions.
We could also remove the privilege check in the pg_sequences view
definition in v18 if we modify this function to return NULL for
sequences for which the current user lacks privileges, but that is
left as a future exercise for when v18 development begins.
Reviewed-by: Tom Lane, Michael Paquier
Discussion: https://postgr.es/m/20240501005730.GA594666%40nathanxps13
Backpatch-through: 12
The catalog view pg_stats_ext fails to consider privileges for
expression statistics. The catalog view pg_stats_ext_exprs fails
to consider privileges and row-level security policies. To fix,
restrict the data in these views to table owners or roles that
inherit privileges of the table owner. It may be possible to apply
less restrictive privilege checks in some cases, but that is left
as a future exercise. Furthermore, for pg_stats_ext_exprs, do not
return data for tables with row-level security enabled, as is
already done for pg_stats_ext.
On the back-branches, a fix-CVE-2024-4317.sql script is provided
that will install into the "share" directory. This file can be
used to apply the fix to existing clusters.
Bumps catversion on 'master' branch only.
Reported-by: Lukas Fittl
Reviewed-by: Noah Misch, Tomas Vondra, Tom Lane
Security: CVE-2024-4317
Backpatch-through: 14
94985c210 added code to detect when WindowFuncs were monotonic and
allowed additional quals to be "pushed down" into the subquery to be
used as WindowClause runConditions in order to short-circuit execution
in nodeWindowAgg.c.
The Node representation of runConditions wasn't well chosen, and because
we do qual pushdown before planning the subquery, planning the subquery
could perform subquery pull-up of nested subqueries.  For WindowFuncs
with arguments, those arguments could be changed after the qual had been
pushed down to the subquery.
This was made more difficult by the fact that the code duplicated the
WindowFunc inside an OpExpr to include in the WindowClause's runCondition
field.  This could result in duplication of subqueries, and a pull-up of
such a subquery could result in another initplan parameter being issued
for the second version of the subplan.  This could result in errors such
as:
ERROR: WindowFunc not found in subplan target lists
Here in the back-branches, we don't have the flexibility to improve the
Node representation to resolve this, so instead we just disable the
runCondition optimization for ntile() unless the argument is a Const
(v16 only), and likewise for count(expr) (both v15 and v16).  count(*)
is unaffected.  All other window functions which support this
optimization take zero arguments and therefore are unaffected.
Bug: #18170
Reported-by: Zuming Jiang
Discussion: https://postgr.es/m/18170-f1d17bf9a0d58b24@postgresql.org
Backpatch-through: 15 (master will be fixed independently)
A parallel worker accumulates its buffer usage in its own pgBufferUsage,
which is then added to the leader's at the end of the parallel vacuum.
However, since the leader process used dedicated VacuumPage{Hit, Miss,
Dirty} globals for the buffer usage reporting, the workers' buffer usage
was not included, leading to an incorrect buffer usage report.
To fix the problem, this commit makes vacuum use pgBufferUsage
instruments for buffer usage reporting instead of VacuumPage{Hit,
Miss, Dirty} globals. These global variables are still used by the
ANALYZE command and autoanalyze.
This also fixes the buffer usage report of vacuuming on temporary
tables, since the buffers dirtied by MarkLocalBufferDirty() were not
tracked by the VacuumPageDirty variable.
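For illustration, a hedged sketch of the reporting pattern (variable
names are illustrative): snapshot pgBufferUsage before the work and
report the difference afterwards, which now includes whatever the
parallel workers folded into the leader's counters:

    BufferUsage startbufferusage = pgBufferUsage;
    BufferUsage bufferusage;

    /* ... do the (possibly parallel) vacuum work ... */

    memset(&bufferusage, 0, sizeof(BufferUsage));
    BufferUsageAccumDiff(&bufferusage, &pgBufferUsage, &startbufferusage);

    /*
     * bufferusage now covers the workers' contributions too, since they
     * fold their pgBufferUsage into the leader's; report, for example,
     * bufferusage.shared_blks_hit + bufferusage.local_blks_hit as hits.
     */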
Parallel vacuum was introduced in 13, but the buffer usage reporting
for VACUUM command with the VERBOSE option was implemented in
15. So backpatch to 15.
Reported-by: Anthonin Bonnefoy
Author: Anthonin Bonnefoy
Reviewed-by: Alena Rybakina, Masahiko Sawada
Discussion: https://postgr.es/m/CAO6_XqrQk+QZQcYs_C6nk0cMfHuUWk85vT9CrcA1NffFbAVE2A@mail.gmail.com
Backpatch-through: 15
As an optimization, we store "name" columns as cstrings in btree
indexes.
Here we modify things so that Index Only Scans convert these cstrings
back to NAMEDATALEN-byte names rather than storing the bare cstring in
the tuple slot, as was happening previously.
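A hedged sketch of the conversion (the exact test used to detect the
cstring-in-index case is illustrative, not the committed code):

    if (TupleDescAttr(slot->tts_tupleDescriptor, attnum)->atttypid == NAMEOID)
    {
        /* pad the cstring from the index back out to a full Name datum */
        Name        name = (Name) palloc0(NAMEDATALEN);

        namestrcpy(name, DatumGetCString(values[attnum]));
        values[attnum] = NameGetDatum(name);
    }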
Bug: #17855
Reported-by: Alexander Lakhin
Reviewed-by: Alexander Lakhin, Tom Lane
Discussion: https://postgr.es/m/17855-5f523e0f9769a566@postgresql.org
Backpatch-through: 12, all supported versions
vac_update_datfrozenxid() did multiple loads of relfrozenxid and
relminmxid from buffer memory, and it assumed each would get the same
value. Not so if a concurrent vac_update_relstats() did an inplace
update. Commit 2d2e40e3be fixed the same
kind of bug in vac_truncate_clog(). Today's bug could cause the
rel-level field and XIDs in the rel's rows to precede the db-level
field. A cluster having such values should VACUUM affected tables.
Back-patch to v12 (all supported versions).
Discussion: https://postgr.es/m/20240423003956.e7.nmisch@google.com
ALTER COLUMN TYPE wasn't expecting to find any pg_proc objects
depending on the column whose type is to be altered. That indeed
wasn't possible when this code was written, but it is possible
since we introduced new-style SQL function bodies.
It's about as difficult to fix this case as it is to fix dependent
views, and we've been punting on those for years, so I don't feel
too awful about punting for functions too. (I sure wouldn't risk
back-patching such code.) So just throw a more user-facing error.
Also, adjust some of the existing comments to reflect that these
are all pretty much the same issue.
(This patch also fixes it so we will tolerate finding such a
dependency during ALTER COLUMN SET EXPRESSION; in that, we need
not do anything to the function, so no error is wanted. That
problem is new in HEAD.)
Per bug #18449 from Alexander Lakhin. Back-patch to v14 where
we added new-style SQL functions.
Discussion: https://postgr.es/m/18449-f8248467aaa294d5@postgresql.org
In commit 25cd2d640 I (tgl) opined that "The additions of the months
and microseconds fields could also overflow, of course. However,
I believe we need no additional checks there; the existing range
checks should catch such cases". This is demonstrably wrong however
for the microseconds field, and given that discovery it seems prudent
to be paranoid about the months addition as well.
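A hedged sketch of the overflow-detecting style (field and variable names
are illustrative, and error reporting is shown with ereport for brevity;
the helpers are from common/int.h):

    if (pg_add_s32_overflow(itm_in->tm_mon, months, &itm_in->tm_mon) ||
        pg_add_s64_overflow(itm_in->tm_usec, usecs, &itm_in->tm_usec))
        ereport(ERROR,
                (errcode(ERRCODE_DATETIME_FIELD_OVERFLOW),
                 errmsg("interval field value out of range")));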
Report and patch by Joseph Koshakow. As before, back-patch to all
supported branches. (However, the test case doesn't work before
v15 because we didn't allow wider-than-int32 numbers in interval
literals. A variant test could probably be built that fits within
that restriction, but it didn't seem worth the trouble.)
Discussion: https://postgr.es/m/CAAvxfHf77sRHKoEzUw9_cMYSpbpNS2C+J_+8Dq4+0oi8iKopeA@mail.gmail.com
In the wake of commits dac048f71 and ecaf7c5df, `make headerscheck`
no longer generated all headers that are included by other headers,
causing headerscheck/cpluspluscheck to fail. To fix, backpatch enough
makefile rules from 721856ff2 to generate all required headers.
Reported by Marina Polyakova
Backpatch to version 16 only, as the issue is not present on master
Discussion: https://postgr.es/m/231ea1127719b2b3d6d1c05f75808981%40postgrespro.ru
We missed performing table sync if an invalidation happened while the
list of non-ready tables was being prepared. This occurred because the
sync state was set to valid at the end of non-ready table list
preparation, irrespective of any invalidations processed while the list
was being prepared.
Fix it by changing the boolean variable to a tri-state enum and by setting
table state to valid only if no invalidations have occurred while the list
is being prepared.
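A hedged sketch of the tri-state idea (the enum and variable names are
illustrative):

    typedef enum
    {
        SYNC_TABLE_STATE_NEEDS_REBUILD,     /* list must be (re)built */
        SYNC_TABLE_STATE_REBUILD_STARTED,   /* list is being built */
        SYNC_TABLE_STATE_VALID              /* list is up to date */
    } SyncingTablesState;

    static SyncingTablesState table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;

    /* invalidation callback: always force a rebuild */
    table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;

    /* after building the list: mark valid only if nothing invalidated it */
    if (table_states_validity == SYNC_TABLE_STATE_REBUILD_STARTED)
        table_states_validity = SYNC_TABLE_STATE_VALID;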
Reported-by: Alexander Lakhin
Diagnosed-by: Alexander Lakhin
Author: Vignesh C
Reviewed-by: Hou Zhijie, Alexander Lakhin, Ajin Cherian, Amit Kapila
Backpatch-through: 15
Discussion: https://postgr.es/m/711a6afe-edb7-1211-cc27-1bef8239eec7@gmail.com
When specifying the createdb strategy, the documentation suggests valid
options are FILE_COPY and WAL_LOG, but the code does a case-sensitive
comparison and accepts only "file_copy" and "wal_log" as valid.
Fixed by doing a case-insensitive comparison using pg_strcasecmp(), same
as for other string parameters nearby.
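A hedged sketch of the corrected comparison (enum values and the exact
error wording are illustrative):

    if (pg_strcasecmp(strategy, "wal_log") == 0)
        dbstrategy = CREATEDB_WAL_LOG;
    else if (pg_strcasecmp(strategy, "file_copy") == 0)
        dbstrategy = CREATEDB_FILE_COPY;
    else
        ereport(ERROR,
                (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
                 errmsg("invalid create database strategy \"%s\"", strategy),
                 errhint("Valid strategies are \"wal_log\" and \"file_copy\".")));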
While at it, apply fmtId() to a nearby "locale_provider". This already
did the comparison in case-insensitive way, but the value would not be
double-quoted, confusing the parser and the error message.
Backpatch to 15, where the strategy was introduced.
Backpatch-through: 15
Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/90c6913a-1dd2-42b4-8365-ce3b09c39b17@enterprisedb.com
When building a join clause derived from an EquivalenceClass, if the
clause is to be used with an appendrel child relation then make sure
its clause_relids include the relids of that child relation.
Normally this would be true already because the EquivalenceMember
would be a Var of that relation. However, if the appendrel represents
a flattened UNION ALL construct then some child EquivalenceMembers
could be constants with no relids. The resulting under-marked clause
is problematic because it could mislead join_clause_is_movable_into
about where the clause should be evaluated. We do not have an example
showing incorrect plan generation, but there are existing cases in
the regression tests that will fail the Asserts this patch adds to
get_baserel_parampathinfo. A similarly wrong conclusion about a
clause being considered by get_joinrel_parampathinfo would lead to
wrong placement of the clause. (This also squares with the way
that clause_relids is calculated for non-equijoin clauses in
adjust_appendrel_attrs.)
The other reason for wanting these new Asserts is that the previous
blithe assumption that the results of generate_join_implied_equalities
"necessarily satisfy join_clause_is_movable_into" turns out to be
wrong pre-v16. If it's still wrong it'd be good to find out.
Per bug #18429 from Benoît Ryder. The bug as filed was fixed by
commit 2489d76c4, but these changes correlate with the fix we
will need to apply in pre-v16 branches.
Discussion: https://postgr.es/m/18429-8982d4a348cc86c6@postgresql.org
Commit 2ed8f9a01 intended to institute a policy that if a
RangeTblFunction has a coldeflist, then the function return type is
certainly RECORD, and we should use the coldeflist as the source of
truth about what the columns of the record type are. When the
original function has been folded to a constant, inspection of the
constant might give a different answer. This situation will lead to
a tuple-type-mismatch error at execution, but up until that point we
need to consistently believe the coldeflist, or we'll have problems
from different bits of code reaching different conclusions.
expandRTE didn't get that memo though, and would try to produce a
tupdesc based on the constant in this situation, leading to an
assertion failure. (Desultory testing suggests that non-assert
builds often manage to give the expected error, although I also
saw a "cache lookup failed for type 0" error, and it seems at
least possible that a crash could happen.)
Some other callers of get_expr_result_type and get_expr_result_tupdesc
were also being incautious about this. While none of them seem to
have actual bugs, they're working harder than necessary in this case,
besides which it seems safest to have an explicit policy of not using
those functions on an RTE with a coldeflist. Adjust the code
accordingly, and add commentary to funcapi.c about this policy.
Also fix an obsolete comment that claimed "get_expr_result_type()
doesn't know how to extract type info from a RECORD constant".
That hasn't been true since commit d57534740.
Per bug #18422 from Alexander Lakhin.
As with the previous commit, back-patch to all supported branches.
Discussion: https://postgr.es/m/18422-89ca86c8eac5246d@postgresql.org
Commit 6bcda4a721 replaced PG_DETOAST_DATUM with PG_DETOAST_DATUM_PACKED
in two BRIN output functions, for minmax-multi and bloom opclasses. But
this is incorrect - the code accesses the data through structs that
already include a 4B header, so the detoast needs to match that. The
PACKED macro, however, may keep the 1B header, which means the struct
fields will point to incorrect data.
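For illustration (the struct name shown is the bloom case; minmax-multi
is analogous), the output functions need the fully-detoasted form:

    /* struct starts with a full 4B varlena header, so detoast completely */
    BloomFilter *filter = (BloomFilter *) PG_DETOAST_DATUM(PG_GETARG_DATUM(0));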
Backpatch-through: 16
Discussion: https://postgr.es/m/1df00a66-db5a-4e66-809a-99b386a06d86%40enterprisedb.com
Properly update the number of bits set in the bitmap after merging the
filters in brin_bloom_union.
This is mostly harmless, as the counter is used only in the output
function, which means pageinspect may show incorrect information about
the BRIN summary. The counter does not affect correctness.
Discovered while adding a regression test comparing indexes built with
and without parallelism. The parallel index builds exercise the union
procedure when merging results from workers, which is otherwise very
hard to do in a test - which is why this went unnoticed until now.
Backpatch through 14, where the BRIN bloom opclasses were introduced.
Backpatch-through: 14
Discussion: https://postgr.es/m/1df00a66-db5a-4e66-809a-99b386a06d86%40enterprisedb.com
GetPageWithFreeSpace() callers assume the returned block exists in the
main fork, failing with "could not read block" errors if that doesn't
hold. Make that assumption reliable now. It hadn't been guaranteed,
due to the weak WAL and data ordering of participating components. Most
operations on the fsm fork are not WAL-logged. Relation extension is
not WAL-logged. Hence, an fsm-fork block on disk can reference a
main-fork block that no WAL record has initialized. That could happen
after an OS crash, a replica promote, or a PITR restore. wal_log_hints
makes the trouble easier to hit; a replica promote or PITR ending just
after a relevant fsm-fork FPI_FOR_HINT may yield this broken state. The
v16 RelationAddBlocks() mechanism also makes the trouble easier to hit,
since it bulk-extends even without extension lock waiters. Commit
917dc7d239 stopped trouble around
truncation, but vectors involving PageIsNew() pages remained.
This implementation adds a RelationGetNumberOfBlocks() call when the
cached relation size doesn't confirm a block exists. We've been unable
to identify a benchmark that slows materially, but this may show up as
additional time in lseek(). An alternative without that overhead would
be a new ReadBufferMode such that ReadBufferExtended() returns NULL
after a 0-byte read, with all other errors handled normally. However,
each GetFreeIndexPage() caller would then need code for the return-NULL
case. Back-patch to v14, due to earlier versions not caching relation
size and the absence of a pre-v16 problem report.
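A hedged sketch of the idea (not the committed code; the fallback action
depends on the caller):

    BlockNumber targetBlock = GetPageWithFreeSpace(relation, spaceNeeded);

    if (targetBlock != InvalidBlockNumber &&
        targetBlock >= RelationGetNumberOfBlocks(relation))
    {
        /*
         * The FSM references a block past the end of the main fork, which
         * is possible after a crash, promote, or PITR since the FSM is not
         * WAL-logged.  Treat the entry as bogus and extend instead.
         */
        targetBlock = InvalidBlockNumber;
    }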
Ronan Dunklau. Reported by Ronan Dunklau.
Discussion: https://postgr.es/m/1878547.tdWV9SEqCh%40aivenlaptop
This function would have the same issue we solved in commit 501cfd07d:
If an error is thrown after calling CreateWaitEventSet(), the file
descriptor (on epoll- or kqueue-based systems) or handles (on Windows)
that the WaitEventSet contains are leaked.
Like that commit, use PG_TRY-PG_FINALLY (PG_TRY-PG_CATCH in v12) to make
sure the WaitEventSet is freed properly.
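The resulting pattern looks roughly like this (v13+ form; names of the
local variables are illustrative):

    WaitEventSet *set = CreateWaitEventSet(CurrentMemoryContext, nevents);

    PG_TRY();
    {
        /* ... AddWaitEventToSet() calls and WaitEventSetWait() loop ... */
    }
    PG_FINALLY();
    {
        /* releases the epoll/kqueue file descriptor or Windows handles */
        FreeWaitEventSet(set);
    }
    PG_END_TRY();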
Back-patch to all supported versions, but as we do not have this issue
in HEAD (cf. commit 50c67c201), no need to apply this patch to it.
Discussion: https://postgr.es/m/CAPmGK16MqdDoD8oatp8SQWaEa4vS3nfQqDN_Sj9YRuu5J3Lj9g%40mail.gmail.com
Commit 72559438 started copying more attributes from AttributeTemplate
to the functions we generate on the fly. In the case of deform
functions, which return void, this meant that "noundef", from
AttributeTemplate's return value (a Datum) was copied to a void type.
Older LLVM releases were OK with that, but LLVM 18 crashes.
Update our llvm_copy_attributes() function to skip copying the attribute
for the return value, if the target function returns void.
Thanks to Dmitry Dolgov for help chasing this down.
Back-patch to all supported releases, like 72559438.
Reported-by: Pavel Stehule <pavel.stehule@gmail.com>
Reviewed-by: Dmitry Dolgov <9erthalion6@gmail.com>
Discussion: https://postgr.es/m/CAFj8pRACpVFr7LMdVYENUkScG5FCYMZDDdSGNU-tch%2Bw98OxYg%40mail.gmail.com
No configured-by-FDW events would result in "return" directly out of a
PG_TRY block, making the exception stack dangling. Repair.
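For illustration of the hazard (simplified, not the original code):
returning from inside PG_TRY() skips PG_END_TRY(), so the error-handling
stack keeps a pointer into a stack frame that no longer exists; the fix
is to arrange for control to fall through to PG_END_TRY() instead.

    PG_TRY();
    {
        if (nevents == 0)
            return;         /* WRONG: leaves the exception stack dangling */

        /* ... wait for the configured events ... */
    }
    PG_FINALLY();
    {
        FreeWaitEventSet(set);
    }
    PG_END_TRY();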
Oversight in commit 501cfd07d; back-patch to v14, like that commit, but
as we do not have this issue in HEAD (cf. commit 50c67c201), no need to
apply this patch to it.
In passing, improve a comment about the handling of in-process requests
in a postgres_fdw.c function called from this function.
Alexander Pyhalov, with comment adjustment/improvement by me.
Discussion: https://postgr.es/m/425fa29a429b21b0332737c42a4fdc70%40postgrespro.ru
If temp tables have dependencies (such as sequences) then it's
possible for autovacuum's cleanup of orphan temp tables to deadlock
against an incoming backend that's trying to clean out the temp
namespace for its own use. That can happen because RemoveTempRelations'
performDeletion call can visit objects within the namespace in
an order different from the order in which a per-table deletion
will visit them.
To fix, observe that performDeletion will begin by taking an exclusive
lock on the temp namespace (even though it won't actually delete it).
So, if we can get a shared lock on the namespace, we can be sure we're
not running concurrently with RemoveTempRelations, while also not
conflicting with ordinary use of the namespace. This requires
introducing a conditional version of LockDatabaseObject, but that's no
big deal. (It's surprising we've got along without that this long.)
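A hedged sketch of how autovacuum's orphan-cleanup path can use this (the
name of the conditional variant and the surrounding details are
illustrative):

    /* don't queue up behind, or deadlock against, RemoveTempRelations() */
    if (!ConditionalLockDatabaseObject(NamespaceRelationId, namespaceId, 0,
                                       AccessShareLock))
        continue;       /* owning backend is busy with it; try again later */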
Report and patch by Mikhail Zhilin. Back-patch to all supported
branches.
Discussion: https://postgr.es/m/c43ce028-2bc2-4865-9b89-3f706246eed5@postgrespro.ru
When a plain aggregate is used as a window function, and the window
frame start is specified as UNBOUNDED PRECEDING, the frame's head
cannot move so we do not need to use moving-aggregate mode. The check
for that was put into initialize_peragg(), failing to notice that
ExecInitWindowAgg() calls that function before it's filled in
winstate->frameOptions. Since makeNode() would have zeroed the field,
this didn't provoke uninitialized-value complaints, nor would the
erroneous decision have resulted in more than a little inefficiency.
Still, it's wrong, so move the initialization of
winstate->frameOptions earlier to make it work properly.
While here, also fix a thinko in a comment. Both errors crept in in
commit a9d9acbf2 which introduced the moving-aggregate mode.
Spotted by Vallimaharajan G. Back-patch to all supported branches.
Discussion: https://postgr.es/m/18e7f2a5167.fe36253866818.977923893562469143@zohocorp.com
Ordinary ALTER TABLE SET SCHEMA will also move any owned sequences
into the new schema. We failed to do likewise for foreign tables,
because AlterTableNamespaceInternal believed that only certain
relkinds could have indexes, owned sequences, or constraints.
We could simply add foreign tables to that relkind list, but it
seems likely that the same oversight could be made again in
future. Instead let's remove the relkind filter altogether.
These functions shouldn't cost much when there are no objects
that they need to process, and surely this isn't an especially
performance-critical case anyway.
Per bug #18407 from Vidushi Gupta. Back-patch to all supported
branches.
Discussion: https://postgr.es/m/18407-4fd07373d252c6a0@postgresql.org
The musl dynamic linker saves a pointer to the process' environment
value of LD_LIBRARY_PATH very early in startup. When we move/clobber
the environment to make more room for ps status strings, we clobber
that value and thereby prevent libraries from being found via
LD_LIBRARY_PATH, which breaks the use of a temporary installation
for testing purposes. To fix, stop collecting usable space for
ps status if we notice that the variable we are about to clobber
is LD_LIBRARY_PATH. This will result in some reduction in how long
the ps status can be, but it's only likely to occur in temporary
test contexts, so it doesn't seem like a big problem. In any case,
we don't have to do it if we see we are on glibc, which surely is
where the majority of our Linux testing is done.
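A hedged sketch of the check (names are illustrative; the real code can
also skip all of this on glibc):

    extern char **environ;

    for (i = 0; environ[i] != NULL; i++)
    {
        /* don't clobber LD_LIBRARY_PATH, or anything after it */
        if (strncmp(environ[i], "LD_LIBRARY_PATH=", 16) == 0)
            break;

        end_of_area = environ[i] + strlen(environ[i]) + 1;
    }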
Thomas Munro, Bruce Momjian, and Tom Lane, per report from Wolfgang
Walther. Back-patch to all supported branches, with the hope that
we'll set up a buildfarm animal to test on this platform.
Discussion: https://postgr.es/m/fddd1cd6-dc16-40a2-9eb5-d7fef2101488@technowledgy.de
Up to now, all of the "catcache list" objects within a catalog cache
were just chained together on a single dlist, requiring O(N) time to
search. Remarkably, we've not had serious performance problems with
that so far; but we got a complaint of a bad performance regression
from v15 in a case with a large number of roles in the system, which
traced down to O(N^2) total time when we probed N catcache lists.
Replace that data structure with a hashtable having an enlargeable
number of dlists, in an exactly parallel way to the data structure
we've used for years for the plain CatCTup cache members. The extra
cost of maintaining a hash table seems negligible, since we were
already computing a hash value for list searches.
Normally this'd be HEAD-only material, but in view of the performance
regression it seems advisable to back-patch into v16. In the v16
version of the patch, leave the dead cc_lists field where it is and
add the new fields at the end of struct catcache, to avoid possible
ABI breakage in case any external code is looking at these structs.
(We assume no external code is actually allocating new catcache
structs.)
Per report from alex work.
Discussion: https://postgr.es/m/CAGvXd3OSMbJQwOSc-Tq-Ro1CAz=vggErdSG7pv2s6vmmTOLJSg@mail.gmail.com
Given a subplan in a MERGE query, EXPLAIN would sometimes fail to
properly display expressions involving Params referencing variables in
other parts of the plan tree.
This would affect subplans outside the topmost join plan node, for
which expansion of Params would go via the top-level ModifyTable plan
node. The problem was that "inner_tlist" for the ModifyTable node's
deparse_namespace was set to the join node's targetlist, but
"inner_plan" was set to the ModifyTable node itself, rather than the
join node, leading to incorrect results when descending to the
referenced variable.
Fix and backpatch to v15, where MERGE was introduced.
Discussion: https://postgr.es/m/CAEZATCWAv-sZuH%2BwG5xJ-%2BGt7qGNGX8wUQd3XYydMFDKgRB9nw%40mail.gmail.com
Similar to d8a295389, trim off any PathKeys which are for ORDER BY /
DISTINCT aggregate functions from the PathKey List for the Gather Merge
paths created by gather_grouping_paths(). These additional PathKeys are
not valid to use after grouping has taken place, as these PathKeys belong
to columns which are inputs to an aggregate function and are therefore
unavailable after aggregation.
Reported-by: Alexander Lakhin
Discussion: https://postgr.es/m/cf63174c-8c89-3953-cb49-48f41f74941a@gmail.com
Backpatch-through: 16, where 1349d2790 was added
Commit a3c7a993d fixed some cases involving target columns that are
arrays or composites by applying transformAssignedExpr to the VALUES
entries, and then stripping off any assignment ArrayRefs or
FieldStores that the transformation added. But I forgot about domains
over arrays or composites :-(. Such cases would either fail with
surprising complaints about mismatched datatypes, or insert unexpected
coercions that could lead to odd results. To fix, extend the
stripping logic to get rid of CoerceToDomain if it's atop an ArrayRef
or FieldStore.
While poking at this, I realized that there's a poorly documented and
not-at-all-tested behavior nearby: we coerce each VALUES column to
the domain type separately, and rely on the rewriter to merge those
operations so that the domain constraints are checked only once.
If that merging did not happen, it's entirely possible that we'd get
unexpected domain constraint failures due to checking a
partially-updated container value. There's no bug there, but while
we're here let's improve the commentary about it and add some test
cases that explicitly exercise that behavior.
Per bug #18393 from Pablo Kharo. Back-patch to all supported
branches.
Discussion: https://postgr.es/m/18393-65fedb1a0de9260d@postgresql.org
There is a very ancient hack in check_sql_fn_retval that allows a
single SELECT targetlist entry of composite type to be taken as
supplying all the output columns of a function returning composite.
(This is grotty and fundamentally ambiguous, but it's really hard
to do nested composite-returning functions without it.)
As far as I know, that doesn't cause any problems in ordinary
functions. It's disastrous for procedures however. All procedures
that have any output parameters are labeled with prorettype RECORD,
and the CALL code expects it will get back a record with one column
per output parameter, regardless of whether any of those parameters
is composite. Doing something else leads to an assertion failure
or core dump.
This is simple enough to fix: we just need to not apply that rule
when considering procedures. However, that requires adding another
argument to check_sql_fn_retval, which at least in principle might be
getting called by external callers. Therefore, in the back branches
convert check_sql_fn_retval into an ABI-preserving wrapper around a
new function check_sql_fn_retval_ext.
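A hedged sketch of the arrangement (parameter lists abbreviated from
memory and not necessarily exact across branches; see functions.c for
the real signatures): the old symbol keeps its signature and simply
forwards, so external callers compiled against it keep working:

    bool
    check_sql_fn_retval(List *queryTreeLists,
                        Oid rettype, TupleDesc rettupdesc,
                        bool insertDroppedCols,
                        List **resultTargetList)
    {
        return check_sql_fn_retval_ext(queryTreeLists, rettype, rettupdesc,
                                       PROKIND_FUNCTION,
                                       insertDroppedCols, resultTargetList);
    }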
Per report from Yahor Yuzefovich. This has been broken since we
implemented procedures, so back-patch to all supported branches.
Discussion: https://postgr.es/m/CABz5gWHSjj2df6uG0NRiDhZ_Uz=Y8t0FJP-_SVSsRsnrQT76Gg@mail.gmail.com
Commit 387da18874 moved the code to put the socket into non-blocking
mode from socket_set_nonblocking() into the one-time initialization
function, pq_init(). In socket_set_nonblocking(), there indeed was a
risk of recursion on failure like the comment said, but in pq_init(),
ERROR or FATAL is fine. There's even another elog(FATAL) just after
this, if setting FD_CLOEXEC fails.
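A hedged sketch of the pq_init() behavior described above (the exact
message wording is illustrative):

    if (!pg_set_noblock(MyProcPort->sock))
        ereport(FATAL,
                (errmsg("could not set socket to nonblocking mode: %m")));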
Note that COMMERROR merely logged the error; it did not close the
connection, so if putting the socket into non-blocking mode failed we
would use the connection anyway. You might not immediately notice,
because most socket operations in a regular backend wait for the
socket to become readable/writable anyway. But e.g. replication will
be quite broken.
Backpatch to all supported versions.
Discussion: https://www.postgresql.org/message-id/d40a5cd0-2722-40c5-8755-12e9e811fa3c@iki.fi
It's important for 'all_visible_according_to_vm' to correctly reflect
whether the VM bit is set or not, even when we are not trusting the VM
to skip pages, because contrary to what the comment said,
lazy_scan_prune() relies on it.
If it's incorrectly set to 'false', when the VM bit is in fact set,
lazy_scan_prune() will try to set the VM bit again and dirty the page
unnecessarily. As a result, if you used DISABLE_PAGE_SKIPPING, all
heap pages were dirtied, even if there were no changes. We would also
fail to clear any VM bits that were set incorrectly.
This was broken in commit 980ae17310, so backpatch to v16.
Backpatch-through: 16
Reviewed-by: Melanie Plageman, Peter Geoghegan
Discussion: https://www.postgresql.org/message-id/3df2b582-dc1c-46b6-99b6-38eddd1b2784@iki.fi
For pass-by-reference types, the code added in 0b053e78b, which aimed to
resolve a memory leak, was overly aggressive in resetting the per-tuple
memory context. This could result in pfree'd memory being accessed and,
in turn, in failing to find previously cached results in the hash
table.
What was happening was prepare_probe_slot() was switching to the
per-tuple memory context and calling ExecEvalExpr(). ExecEvalExpr() may
have required a memory allocation. Both MemoizeHash_hash() and
MemoizeHash_equal() aggressively reset the per-tuple context, so after
the hash value was determined, the context would have been reset before
MemoizeHash_equal() was called. This could have resulted in
MemoizeHash_equal() looking at pfree'd memory.
This is less likely to have caused issues on a production build as some
other allocation would have had to have reused the pfree'd memory to
overwrite it. Otherwise, the original contents would have been intact.
However, this clearly caused issues on MEMORY_CONTEXT_CHECKING builds.
Author: Tender Wang, Andrei Lepikhov
Reported-by: Tender Wang (using SQLancer)
Reviewed-by: Andrei Lepikhov, Richard Guo, David Rowley
Discussion: https://postgr.es/m/CAHewXNnT6N6UJkya0z-jLFzVxcwGfeRQSfhiwA+NyLg-x8iGew@mail.gmail.com
Backpatch-through: 14, where Memoize was added
In OpenSSL 3.0.0 and later, ERR_reason_error_string randomly refuses
to provide a string for error codes representing system errno values
(e.g., "No such file or directory"). There is a poorly-documented way
to extract the errno from the SSL error code in this case, so do that
and apply strerror, rather than falling back to reporting the error
code's numeric value as we were previously doing.
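A hedged sketch of the approach (not necessarily the exact committed
code; ERR_SYSTEM_ERROR only exists in OpenSSL 3.0+, so the real code
must guard its use appropriately):

    const char *errreason = ERR_reason_error_string(ecode);

    if (errreason == NULL && ERR_SYSTEM_ERROR(ecode))
        errreason = strerror(ERR_GET_REASON(ecode));   /* recover the errno */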
Problem reported by David Zhang, although this is not his proposed
patch; it's instead based on a suggestion from Heikki Linnakangas.
Back-patch to all supported branches, since any of them are likely
to be used with recent OpenSSL.
Discussion: https://postgr.es/m/b6fb018b-f05c-4afd-abd3-318c649faf18@highgo.ca
When an UPDATE or DELETE action in MERGE returns TM_SelfModified,
there are 2 possible causes:
1). The target tuple was already updated or deleted by the current
command. This can happen if the target row joins to more than one
source row, and the SQL standard explicitly says that this must be
an error.
2). The target tuple was already updated or deleted by a later command
in the current transaction. This can happen if the tuple is
modified by a BEFORE trigger or a volatile function used in the
query, and should be an error for the same reason that it is in a
plain UPDATE or DELETE command.
In MERGE's primary error handling block, it failed to check for (2),
causing it to return a misleading error message in such cases.
In the secondary error handling block, following a concurrent update
from another session, it failed to check for (1), causing it to
silently ignore target rows joined to more than one source row,
instead of reporting an error.
Fix this, and add tests for both of these cases.
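A hedged sketch of how the two cases can be distinguished (the error
texts follow the existing UPDATE/DELETE precedent; this is not
necessarily the literal patch):

    /* inside the TM_SelfModified handling for a MERGE update/delete */
    if (context->tmfd.cmax != estate->es_output_cid)
        /* case (2): modified by a later command in this transaction */
        ereport(ERROR,
                (errcode(ERRCODE_TRIGGERED_DATA_CHANGE_VIOLATION),
                 errmsg("tuple to be updated or deleted was already modified by an operation triggered by the current command"),
                 errhint("Consider using an AFTER trigger instead of a BEFORE trigger to propagate changes to other rows.")));

    /* case (1): the current command already touched it, i.e. one target
     * row matched more than one source row */
    ereport(ERROR,
            (errcode(ERRCODE_CARDINALITY_VIOLATION),
             errmsg("MERGE command cannot affect row a second time"),
             errhint("Ensure that not more than one source row matches any one target row.")));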
Per report from Wenjiang Zhang. Back-patch to v15, where MERGE was
introduced.
Discussion: https://postgr.es/m/tencent_41DE0FF443FE14B94A5898D373792109E408%40qq.com
This reverts commit eae7be600b, following a discussion with Tom Lane,
due to concerns that this impacts the decisions made by the planner for
the number of workers spawned, based on the inlining and const-folding
of index expressions and predicates, for cases that would have worked
until this commit.
Discussion: https://postgr.es/m/162802.1709746091@sss.pgh.pa.us
Backpatch-through: 12
In the corner case where a function returning RECORD has been
simplified to a RECORD constant or an inlined ROW() expression,
ExecInitFunctionScan failed to cross-check the function's result
rowtype against the coldeflist provided by the calling query.
That happened because get_expr_result_type is able to extract a
tupdesc from such expressions, which led ExecInitFunctionScan to
ignore the coldeflist. (Instead, it used the extracted tupdesc
to check the function's output, which of course always succeeds.)
I have not been able to demonstrate any really serious consequences
from this, because if some column of the result is of the wrong
type and is directly referenced by a Var of the calling query,
CheckVarSlotCompatibility will catch it. However, we definitely do
fail to report the case where the function returns more columns than
the coldeflist expects, and in the converse case where it returns
fewer columns, we get an assert failure (but, seemingly, no worse
results in non-assert builds).
To fix, always build the expected tupdesc from the coldeflist if there
is one, and consult get_expr_result_type only when there isn't one.
Also remove the failing Assert, even though it is no longer reached
after this fix. It doesn't seem to be adding anything useful, since
later checking will deal with cases with the wrong number of columns.
The only other place I could find that is doing something similar
is inline_set_returning_function. There's no live bug there because
we cannot be looking at a Const or RowExpr, but for consistency
change that code to agree with ExecInitFunctionScan.
Per report from PetSerAl. After some debate I've concluded that
this should be back-patched. There is a small risk that somebody
has been relying on such a case not throwing an error, but I judge
this outweighed by the risk that I've missed some way in which the
failure to cross-check has worse consequences than sketched above.
Discussion: https://postgr.es/m/CAKygsHSerA1eXsJHR9wft3Gn3wfHQ5RfP8XHBzF70_qcrrRvEg@mail.gmail.com
As coded, the planner logic that calculates the number of parallel
workers to use for a parallel index build uses expressions and
predicates from the relcache, which are flattened for the planner by
eval_const_expressions().
As reported in the bug, an immutable parallel-unsafe function flattened
in the relcache would become a Const, which would be considered as
parallel-safe, even if the predicate or the expressions including the
function are not safe in parallel workers. Depending on the expressions
or predicate used, this could cause the parallel build to fail.
Tests are included that check parallel index builds with a
parallel-unsafe predicate and expressions. Two routines are added to
lsyscache.h to retrieve the expressions and predicate of an index from
its pg_index data.
Reported-by: Alexander Lakhin
Author: Tender Wang
Reviewed-by: Jian He, Michael Paquier
Discussion: https://postgr.es/m/CAHewXN=UaAaNn9ruHDH3Os8kxLVmtWqbssnf=dZN_s9=evHUFA@mail.gmail.com
Backpatch-through: 12
The error messages were reporting the stats kind as 'f', which is not
correct, as that is the "dependencies" statistics kind.
Reported-by: Horst Reiterer
Reviewed-by: Richard Guo
Discussion: https://postgr.es/m/18375-ba99383eb9062d6a@postgresql.org
Backpatch-through: 12, where MCV extended stats were added.
In the case where the target timestamp is before the origin timestamp
and their difference is already an exact multiple of the stride, the
code incorrectly subtracted the stride anyway.
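For reference, a hedged sketch of the corrected rounding (variable names
are illustrative): with C's truncating division, a negative difference
needs to be pushed down one extra stride only when it is not an exact
multiple.

    int64       tm_diff = timestamp - origin;
    int64       tm_modulo = tm_diff % stride_usecs;
    int64       tm_delta = tm_diff - tm_modulo;

    /* the buggy code subtracted the stride whenever tm_diff was negative */
    if (tm_modulo < 0)
        tm_delta -= stride_usecs;

    result = origin + tm_delta;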
Also detect several integer-overflow cases that previously produced
bogus results. (The submitted patch tried to avoid overflow, but
I'm not convinced it's right, and problematic cases are so far out of
the plausibly-useful range that they don't seem worth sweating over.
Let's just use overflow-detecting arithmetic and throw errors.)
timestamp_bin() and timestamptz_bin() are basically identical and
so had identical bugs. Fix both.
Report and patch by Moaaz Assali, adjusted some by me. Back-patch
to v14 where date_bin() was introduced.
Discussion: https://postgr.es/m/CALkF+nvtuas-2kydG-WfofbRSJpyODAJWun==W-yO5j2R4meqA@mail.gmail.com
When this assertion was installed (in commit d2f60a3ab), I thought
it was only for catching server logic errors that caused accesses to
catalogs that were undergoing index rebuilds. However, it will also
fire in case of a user-defined index expression that attempts to
access its own table. We occasionally see reports of people trying
to do that, and typically getting unintelligible low-level errors
as a result. We can provide a more on-point message by making this
a regular runtime check.
While at it, adjust the similar error check in
systable_beginscan_ordered to use the same message text. That one
is (probably) not reachable without a coding bug, but we might as
well use a translatable message if we have one.
Per bug #18363 from Alexander Lakhin. Back-patch to all supported
branches.
Discussion: https://postgr.es/m/18363-e3598a5a572d0699@postgresql.org
build_child_join_sjinfo creates a derived SpecialJoinInfo in
the short-lived GEQO context, but afterwards the semi_rhs_exprs
from that may be used in a UniquePath for a child base relation.
This breaks the expectation that all base-relation-level structures
are in the planning-lifespan context, leading to use of a dangling
pointer with probable ensuing crash later on in create_unique_plan.
To fix, copy the expression trees when making a UniquePath.
Per bug #18360 from Alexander Lakhin. This has been broken since
partitionwise joins were added, so back-patch to all supported
branches.
Discussion: https://postgr.es/m/18360-a23caf3157f34e62@postgresql.org
Verify that a user running MERGE with a DO NOTHING clause has
privileges to read the table, even if no columns are referenced. Such
privileges were already required if the ON clause or any of the WHEN
conditions referenced any column at all, so there's no functional change
in practice.
This change fixes an assertion failure in the case where no column is
referenced by the command and the WHEN clauses are all DO NOTHING.
Backpatch to 15, where MERGE was introduced.
Reported-by: Alena Rybakina <a.rybakina@postgrespro.ru>
Reported-by: Alexander Lakhin <exclusion@gmail.com>
Discussion: https://postgr.es/m/4d65a385-7efa-4436-a825-0869f89d9d92@postgrespro.ru