Presently, we rely on each individual signal handler to save the
initial value of errno and then restore it before returning if
needed. This is easily forgotten and, if missed, often goes
undetected for a long time.
In commit 3b00fdba9f, we introduced a wrapper signal handler
function that checks whether MyProcPid matches getpid(). This
commit moves the aforementioned errno restoration code from the
individual signal handlers to the new wrapper handler so that we no
longer need to worry about missing it.
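As an illustration of the pattern (a standalone sketch, not the
committed code):

    #include <errno.h>
    #include <signal.h>

    static void (*real_handler)(int);

    /*
     * Illustrative wrapper: save errno on entry and restore it on exit,
     * so the wrapped handler is free to clobber errno.
     */
    static void
    wrapper_handler(int signo)
    {
        int         save_errno = errno;

        real_handler(signo);

        errno = save_errno;
    }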
Reviewed-by: Andres Freund, Noah Misch
Discussion: https://postgr.es/m/20231121212008.GA3742740%40nathanxps13
Previously, when an interval was being rounded to match a precision
specification, a microseconds field close to INT64_MAX or INT64_MIN
could overflow, producing a result that did not even have the correct
sign.
This seems worth fixing, but not worth back-patching, in part
because the ereturn() notation doesn't exist very far back.
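A standalone sketch of the overflow-aware rounding, using the compiler
builtins that pg_add_s64_overflow() and friends wrap (function name
hypothetical):

    #include <stdbool.h>
    #include <stdint.h>

    /*
     * Round usecs to a multiple of "scale", reporting int64 overflow
     * instead of silently wrapping; the caller would then raise
     * "interval out of range".
     */
    static bool
    round_usecs_checked(int64_t usecs, int64_t scale, int64_t *result)
    {
        int64_t     offset = (usecs >= 0) ? scale / 2 : -(scale / 2);

        if (__builtin_add_overflow(usecs, offset, &usecs))
            return false;
        *result = (usecs / scale) * scale;
        return true;
    }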
Report and patch by Joseph Koshakow (some cosmetic mods by me)
Discussion: https://postgr.es/m/CAAvxfHfpuLgqJYzkUcher466Z1LpmE+5Sm+zc8L6zKCOQ+6TDQ@mail.gmail.com
Timezones are not immutable, and so neither is any function that
relies on them. Commit 66ea94e8 introduced a few jsonpath methods that
cast from one date/time type to another and thus may involve the
current timezone. To preserve the immutability of the jsonpath
functions currently marked immutable, disallow these methods from
being called from non-TZ-aware functions.
Jeevan Chalke, per a report from Jian He.
Since its introduction, pg_get_expr() has intended to silently
return NULL if called with an invalid relation OID, as can happen
when scanning the catalogs concurrently with relation drops.
However, there is a race condition: we check validity of the OID
at the start, but it could get dropped just afterward, leading to
failures. This is the cause of some intermittent instability we're
seeing in a proposed new test case, and presumably it's a hazard in
the field as well.
We can fix this by AccessShareLock-ing the target relation for the
duration of pg_get_expr(). Since we don't require any permissions
on the target relation, this is semantically a bit undesirable. But
it turns out that the set_relation_column_names() subroutine already
takes a transient AccessShareLock on that relation, and has done since
commit 2ffa740be in 2012. Given the lack of complaints about that, it
seems like there should be no harm in holding the lock a bit longer.
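The shape of the fix, roughly (a fragment with hypothetical placement;
the real code lives in ruleutils.c):

    /*
     * Lock the relation before relying on its existence so it cannot
     * be dropped out from under us, and bail out quietly if it is
     * already gone.
     */
    rel = try_relation_open(relid, AccessShareLock);
    if (rel == NULL)
        PG_RETURN_NULL();

    /* ... deparse the expression ... */

    relation_close(rel, AccessShareLock);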
Back-patch to all supported branches.
Discussion: https://postgr.es/m/31ddcc01-a71b-4e8c-9948-01d1c47293ca@eisentraut.org
TransactionIdInRecentPast() should return false for all transactions
older than TransamVariables->oldestClogXid. However, the function
compared a FullTransactionId directly to the 32-bit TransactionId,
incorrectly allowing full transaction IDs between nextXid - 2^32 and
oldestClogXid - 2^31.
This commit fixes TransactionIdInRecentPast() by first converting
oldestClogXid into a FullTransactionId, then performing the
comparison.
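A sketch of the corrected comparison (helper name hypothetical): widen
the 32-bit xid relative to the next full transaction ID, then compare
full 64-bit values:

    /*
     * Widen a 32-bit xid that is known to precede nextFullXid into a
     * FullTransactionId, accounting for epoch wraparound.
     */
    static inline FullTransactionId
    widen_relative_to_next(FullTransactionId nextFullXid,
                           TransactionId xid)
    {
        uint32      diff = XidFromFullTransactionId(nextFullXid) - xid;

        return FullTransactionIdFromU64(
            U64FromFullTransactionId(nextFullXid) - (uint64) diff);
    }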
Backpatch to all supported versions.
Reported-by: Egor Chindyaskin
Bug: 18212
Discussion: https://postgr.es/m/18212-547307f8adf57262%40postgresql.org
Author: Karina Litskevich
Reviewed-by: Kyotaro Horiguchi
Backpatch-through: 12
A ResourceOwnerEnlarge() call was missing. That led to an error:
ERROR: ResourceOwnerRemember called but array was full
and an assertion failure, if you tried to extend a temp relation again
after a failure. Alexander's test case used running out of disk space
to trigger the original failure.
This bug was introduced in the large ResourceOwner rewrite commit
b8bff07daa. Before that, the UnpinLocalBuffer() call guaranteed that
the subsequent PinLocalBuffer() will succeed, but after the rewrite,
releasing an old resource doesn't guarantee that there is space for a
new one.
Add a comment explaining why the UnpinBuffer + PinBuffer calls in
BufferAlloc(), with no ResourceOwnerEnlarge() in between, are safe.
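The essence of the fix, as a fragment (exact placement elided): reserve
space in the resource owner before pinning, rather than relying on the
preceding unpin:

    /*
     * After the ResourceOwner rewrite, releasing an old entry no longer
     * guarantees room for a new one, so reserve the space explicitly.
     */
    ResourceOwnerEnlarge(CurrentResourceOwner);
    PinLocalBuffer(bufHdr, true);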
Reported-by: Alexander Lakhin
Discussion: https://www.postgresql.org/message-id/dc574fea-c83e-a600-08cd-10881762e4fa@gmail.com
An OS crash could leave PG_VERSION empty or missing. The same symptom
appeared in a backup by block device snapshot, taken after the next
checkpoint and before the OS flushes the PG_VERSION blocks. Device
snapshots are not a documented backup method, however. Back-patch to
v15, where commit 9c08aea6a3 introduced
STRATEGY=WAL_LOG and made it the default.
Discussion: https://postgr.es/m/20240130195003.0a.nmisch@google.com
Restoring a base backup taken in the middle of CreateDirAndVersionFile()
or write_relmap_file() would lose the function's effects. The symptom
was absence of the database directory, PG_VERSION file, or
pg_filenode.map. If missing the directory, recovery would fail. Either
missing file would not fail recovery but would render the new database
unusable. Fix CreateDirAndVersionFile() with the transam/README "action
first and then write a WAL entry" strategy. That has a side benefit of
moving filesystem mutations out of a critical section, reducing the ways
to PANIC. Fix the write_relmap_file() call with a lock acquisition, so
it interacts with checkpoints like non-CREATE DATABASE calls do.
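Schematically, the transam/README ordering looks like this (a hedged
fragment; helper name and record setup abbreviated):

    xl_dbase_create_wal_log_rec xlrec;

    /*
     * 1. Perform the filesystem action first, outside any critical
     * section, so a failure here can ERROR rather than PANIC.
     */
    write_version_file(dbpath);     /* hypothetical helper name */

    /*
     * 2. Only afterward, WAL-log what was done; recovery redoes the
     * action if the record made it to disk.
     */
    xlrec.db_id = dbid;
    xlrec.tablespace_id = tsid;
    XLogBeginInsert();
    XLogRegisterData((char *) &xlrec, sizeof(xlrec));
    (void) XLogInsert(RM_DBASE_ID, XLOG_DBASE_CREATE_WAL_LOG);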
Back-patch to v15, where commit 9c08aea6a3
introduced STRATEGY=WAL_LOG and made it the default.
Discussion: https://postgr.es/m/20240130195003.0a.nmisch@google.com
Updating the value of stats_fetch_consistency within a transaction
failed to discard the statistics snapshot data of the following views,
leading to behavior contrary to what has been documented since
605994651b, which says that an update of this parameter should discard
all statistics snapshot data:
- pg_stat_archiver
- pg_stat_bgwriter
- pg_stat_checkpointer
- pg_stat_io
- pg_stat_slru
- pg_stat_wal
For example, updating stats_fetch_consistency from "snapshot" to
"cache" in a transaction did not re-fetch any fresh data, instead
continuing to use data cached from the time when "snapshot" was in
use.
Author: Shinya Kato
Discussion: https://postgr.es/m/d77fc5190d4dbe1738d77231488e768b@oss.nttdata.com
Backpatch-through: 15
After calling smgropen(), it was not clear how long you could continue
to use the result, because various code paths including cache
invalidation could call smgrclose(), which freed the memory.
Guarantee that the object won't be destroyed until the end of the
current transaction or, in recovery, until replay of the commit/abort
record that destroys the underlying storage.
smgrclose() is now just an alias for smgrrelease(). It closes files
and forgets all state except the rlocator, but keeps the SMgrRelation
object itself valid until end of transaction.
A new smgrdestroy() function frees the memory immediately; it is used
only by the rare places that know there can be no other references to
the SMgrRelation.
Existing code should be unaffected, but it is now possible for code that
has an SMgrRelation object to use it repeatedly during a transaction as
long as the storage hasn't been physically dropped. Such code would
normally hold a lock on the relation.
This also replaces the "ownership" mechanism of SMgrRelations with a
pin counter. An SMgrRelation can now be "pinned", which prevents it
from being destroyed at end of transaction. There can be multiple pins
on the same SMgrRelation. In practice, the pin mechanism is only used
by the relcache, so there cannot be more than one pin on the same
SMgrRelation, except transiently during swap_relation_files().
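A usage sketch under the new API (the smgropen() arguments shown are
assumptions):

    SMgrRelation reln = smgropen(rlocator, InvalidBackendId);

    /* Pin to keep the object alive across invalidations and EOX. */
    smgrpin(reln);

    /*
     * ... use reln freely; smgrclose()/smgrrelease() may close file
     * descriptors, but the SMgrRelation itself stays valid ...
     */

    smgrunpin(reln);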
Author: Thomas Munro, Heikki Linnakangas
Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Discussion: https://www.postgresql.org/message-id/CA%2BhUKGJ8NTvqLHz6dqbQnt2c8XCki4r2QvXjBQcXpVwxTY_pvA@mail.gmail.com
predicate.c has been using SerialSLRULock (the control lock for its SLRU
structure) to coordinate access to SerialControlData, another of its
numerous shared memory structures; this is unnecessary and confuses
further SLRU scalability work. Create a separate LWLock to cover
SerialControlData.
Extracted from a larger patch from the same author, and some additional
changes by Álvaro.
Author: Dilip Kumar <dilip.kumar@enterprisedb.com>
Discussion: https://postgr.es/m/CAFiTN-vzDvNz=ExGXz6gdyjtzGixKSqs0mKHMmaQ8sOSEFZ33A@mail.gmail.com
libxml2 changed the required signature of error handler callbacks
to make the passed xmlError struct "const". This is causing build
failures on buildfarm member caiman, and no doubt will start showing
up in the field quite soon. Add a version check to adjust the
declaration of xml_errorHandler() according to LIBXML_VERSION.
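The adjustment follows the usual versioned-typedef pattern (the exact
cutoff value for 2.12, 21200, is an assumption here):

    #include <libxml/xmlversion.h>
    #include <libxml/xmlerror.h>

    /* libxml2 2.12 made the xmlError argument const. */
    #if LIBXML_VERSION >= 21200
    #define PgXmlErrorPtr const xmlError *
    #else
    #define PgXmlErrorPtr xmlError *
    #endif

    static void xml_errorHandler(void *data, PgXmlErrorPtr error);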
2.12.x also produces deprecation warnings for contrib/xml2/xpath.c's
assignment to xmlLoadExtDtdDefaultValue. I see no good reason for
that to still be there, seeing that we disabled external DTDs (at a
lower level) years ago for security reasons. Let's just remove it.
Back-patch to all supported branches, since they might all get built
with newer libxml2 once it gets a bit more popular. (The back
branches produce another deprecation warning about xpath.c's use of
xmlSubstituteEntitiesDefault(). We ought to consider whether to
back-patch all or part of commit 65c5864d7 to silence that. It's
less urgent though, since it won't break the buildfarm.)
Discussion: https://postgr.es/m/1389505.1706382262@sss.pgh.pa.us
This adds a new "Memory:" line under the "Planning:" group (which
currently only has "Buffers:") when the MEMORY option is specified.
In order to make the reporting reasonably accurate, we create a separate
memory context for planner activities, to be used only when this option
is given. The total amount of memory allocated by that context is
reported as "allocated"; we subtract memory in the context's freelists
from that and report that result as "used". We use
MemoryContextStatsInternal() to obtain the quantities.
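In rough terms (a hedged fragment; the counter fields are those of
MemoryContextCounters):

    MemoryContextCounters mem_counters;
    int64       allocated;
    int64       used;

    memset(&mem_counters, 0, sizeof(mem_counters));
    /* ... filled in via MemoryContextStatsInternal() ... */

    /* "allocated": everything obtained from the OS by the context. */
    allocated = mem_counters.totalspace;
    /* "used": allocated minus what currently sits on freelists. */
    used = mem_counters.totalspace - mem_counters.freespace;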
The code structure to show buffer usage during planning was not in
amazing shape, so I (Álvaro) modified the patch a bit to clean that up
in passing.
Author: Ashutosh Bapat
Reviewed-by: David Rowley, Andrey Lepikhov, Jian He, Andy Fan
Discussion: https://www.postgresql.org/message-id/CAExHW5sZA=5LJ_ZPpRO-w09ck8z9p7eaYAqq3Ks9GDfhrxeWBw@mail.gmail.com
We perform addition of the days field of an interval via
arithmetic on the Julian-date representation of the timestamp's date.
This step is subject to int32 overflow, and we also should not let
the Julian date become very negative, for fear of weird results from
j2date. (In the timestamptz case, allow a Julian date of -1 to pass,
since it might convert back to zero after timezone rotation.)
The additions of the months and microseconds fields could also
overflow, of course. However, I believe we need no additional
checks there; the existing range checks should catch such cases.
The difficulty with the Julian-date arithmetic, by contrast, is that
j2date's magic modular arithmetic could produce something that merely
looks like it's in-range.
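The added guard amounts to a checked addition of this shape (a sketch,
not the verbatim patch):

    /*
     * Add the interval's days via the Julian-date representation,
     * rejecting int32 overflow and overly negative results that
     * j2date could otherwise wrap into something plausible-looking.
     * (The timestamptz path additionally lets julian == -1 through.)
     */
    if (pg_add_s32_overflow(julian, span->day, &julian) || julian < 0)
        ereport(ERROR,
                (errcode(ERRCODE_DATETIME_VALUE_OUT_OF_RANGE),
                 errmsg("timestamp out of range")));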
Per bug #18313 from Christian Maurer. This has been wrong for
a long time, so back-patch to all supported branches.
Discussion: https://postgr.es/m/18313-64d2c8952d81e84b@postgresql.org
This reverts commit 2197d06224, following a discussion over a Coverity
report: issues like the "billion laughs" attack could cause the backend
to waste CPU and memory even if a client applied checks on the size of
the input data, and libxml2 offers no guarantee that input limits are
respected under XML_PARSE_HUGE.
Discussion: https://postgr.es/m/ZbHlgrPLtBZyr_QW@paquier.xyz
Formerly, the TZ and OF format codes were only supported in to_char(),
but there seems little reason for that restriction. We should at least
have enough support to permit round-tripping the output of to_char().
In that spirit, TZ accepts either zone abbreviations or numeric
(HH or HH:MM) offsets, which are the cases that to_char() can output.
In an ideal world we'd make it take full zone names too, but
that seems like it'd introduce an unreasonable amount of ambiguity,
since the rules for POSIX-spec zone names are so lax.
OF is a subset of this, accepting only HH or HH:MM.
One small benefit of this improvement is that we can simplify
jsonpath's executeDateTimeMethod function, which no longer needs
to consider the HH and HH:MM cases separately. Moreover, letting
it accept zone abbreviations means it will accept "Z" to mean UTC,
which is emitted by JSON.stringify() for example.
Patch by me, reviewed by Aleksander Alekseev and Daniel Gustafsson
Discussion: https://postgr.es/m/1681086.1686673242@sss.pgh.pa.us
Remove a buggy and unnecessary test, along with an unnecessary pstrdup()
and a line of dead code.
Per report, diagnosis and fix from Tom Lane
Discussion: https://postgr.es/m/439811.1706211069@sss.pgh.pa.us
This commit implements the jsonpath .bigint(), .boolean(),
.date(), .decimal([precision [, scale]]), .integer(), .number(),
.string(), .time(), .time_tz(), .timestamp(), and .timestamp_tz()
methods.
.bigint() converts the given JSON string or a numeric value to
the bigint type representation.
.boolean() converts the given JSON string, numeric, or boolean
value to the boolean type representation. In the numeric case, only
integers are allowed. We use the parse_bool() backend function
to convert a string to a bool.
.decimal([precision [, scale]]) converts the given JSON string
or a numeric value to the numeric type representation. If precision
and scale are provided for .decimal(), then it is converted to the
equivalent numeric typmod and applied to the numeric number.
.integer() and .number() convert the given JSON string or a
numeric value to the int4 and numeric type representations,
respectively.
.string() uses the datatype's output function to convert numeric
and various date/time types to the string representation.
The JSON string representing a valid date/time is converted to the
specific date or time type representation using jsonpath .date(),
.time(), .time_tz(), .timestamp(), .timestamp_tz() methods. The
changes use the infrastructure of the .datetime() method and perform
the datatype conversion as appropriate. Unlike the .datetime()
method, none of these methods accept a format template and use ISO
DateTime format instead. However, except for .date(), the
date/time related methods take an optional precision to adjust the
fractional seconds.
Jeevan Chalke, reviewed by Peter Eisentraut and Andrew Dunstan.
This adds a Node *escontext parameter to populate_record_field() and
to a number of functions downstream of it, replacing any ereport()s in
that path by either errsave() or ereturn() as appropriate. This also
adds code to those functions where necessary to return early upon
encountering a soft error.
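At a typical call site, the soft-error convention looks like this (a
hypothetical fragment):

    /*
     * With a non-NULL escontext, errsave()/ereturn() record the error
     * and return instead of throwing; with a NULL escontext they
     * behave like ereport(ERROR).
     */
    if (!valid_input)
        ereturn(escontext, false,
                (errcode(ERRCODE_INVALID_TEXT_REPRESENTATION),
                 errmsg("invalid input for type %s", type_name)));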
The changes here are mainly intended to suppress errors in the
functions of jsonfuncs.c. Functions in any external modules, such as
arrayfuncs.c, that those functions may in turn call are not changed
here based on the assumption that the various checks in jsonfuncs.c
functions should ensure that only values that are structurally valid
get passed to the functions in those external modules. An exception
is made for domain_check() to allow handling domain constraint
violation errors softly.
For testing, this adds a function jsonb_populate_record_valid(),
which returns true if jsonb_populate_record() would finish without
causing an error for the provided JSON object, false otherwise. Note
that jsonb_populate_record() internally calls populate_record(),
which in turn uses populate_record_field().
Extracted from a much larger patch to add SQL/JSON query functions.
Author: Nikita Glukhov <n.gluhov@postgrespro.ru>
Author: Teodor Sigaev <teodor@sigaev.ru>
Author: Oleg Bartunov <obartunov@gmail.com>
Author: Alexander Korotkov <aekorotkov@gmail.com>
Author: Andrew Dunstan <andrew@dunslane.net>
Author: Amit Langote <amitlangote09@gmail.com>
Reviewers have included (in no particular order) Andres Freund,
Alexander Korotkov, Pavel Stehule, Andrew Alsup, Erik Rijkers,
Zihong Yu, Himanshu Upadhyaya, Daniel Gustafsson, Justin Pryzby,
Álvaro Herrera, Jian He, Peter Eisentraut
Discussion: https://postgr.es/m/cd0bb935-0158-78a7-08b5-904886deac4b@postgrespro.ru
Discussion: https://postgr.es/m/20220616233130.rparivafipt6doj3@alap3.anarazel.de
Discussion: https://postgr.es/m/abd9b83b-aa66-f230-3d6d-734817f0995d%40postgresql.org
Discussion: https://postgr.es/m/CA+HiwqHROpf9e644D8BRqYvaAPmgBZVup-xKMDPk-nd4EpgzHw@mail.gmail.com
Discussion: https://postgr.es/m/CA+HiwqE4XTdfb1nW=Ojoy_tQSRhYt-q_kb6i5d4xcKyrLC1Nbg@mail.gmail.com
This commit fixes an error message that failed to show the correct
function and library names when a function cannot be loaded.
While at it, adjust the call to load_external_function() so that this
ERROR can be reached, by making load_external_function() return NULL
rather than fail if a function cannot be found for a given injection
point.
Thinkos in d86d20f0ba.
1. The memcpy()s in InjectionPointAttach() would copy garbage from
beyond the end of the input string to the buffer in shared memory. You
won't usually notice, but if there is not enough valid mapped memory
beyond the end of the string, the read of unmapped memory will
segfault. This was flagged by the Cirrus CI build with address
sanitizer enabled.
2. The memcpy() in injection_point_cache_add() failed to copy the NUL
terminator.
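One safe shape for both copies (a sketch; whether the fix uses
strlcpy() or a bounded memcpy() plus explicit termination is
incidental):

    /*
     * Copy no more than the string itself plus its NUL terminator,
     * not the full fixed-size buffer length.
     */
    strlcpy(entry->library, library, sizeof(entry->library));
    strlcpy(entry->function, function, sizeof(entry->function));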
Discussion: https://www.postgresql.org/message-id/0615a424-b726-4157-afa7-4245629f9512%40iki.fi
Injection points are a new facility that makes it possible for
developers to run custom code in pre-defined code paths. Its goal is
to provide
ways to design and run advanced tests, for cases like:
- Race conditions, where processes need to do actions in a controlled
ordered manner.
- Forcing a state, like an ERROR, FATAL or even PANIC for OOM, to force
recovery, etc.
- Arbitrary sleeps.
This implements some basics, and there are plans to extend it more in
the future depending on what's required. Hence, this commit adds a set
of routines in the backend that allows developers to attach, detach and
run injection points:
- A code path calling an injection point can be declared with the macro
INJECTION_POINT(name).
- InjectionPointAttach() and InjectionPointDetach() to respectively
attach and detach a callback to/from an injection point. An injection
point name is registered in a shmem hash table with a library name and a
function name, which will be used to load the callback attached to an
injection point when its code path is run.
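A minimal usage sketch of these APIs (the point, library, and function
names are hypothetical):

    /* In the backend code path under test: */
    INJECTION_POINT("my-injection-point");

    /* In a test module, attach a callback from a shared library: */
    InjectionPointAttach("my-injection-point",
                         "my_test_library", "my_callback");

    /* ... and detach it once done: */
    InjectionPointDetach("my-injection-point");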
Injection point names are just strings, so that an injection point can
be declared and run by out-of-core extensions and modules, with
callbacks defined in external libraries.
This facility is hidden behind a dedicated switch for ./configure and
meson, disabled by default.
Note that backends use a local cache to store callbacks already
loaded, cleaning up their cache on a best-effort basis if a callback
is found to have been removed. This could be refined further, but what
we have here was sufficient for the tests written while implementing
these backend APIs.
Author: Michael Paquier, with doc suggestions from Ashutosh Bapat.
Reviewed-by: Ashutosh Bapat, Nathan Bossart, Álvaro Herrera, Dilip
Kumar, Amul Sul, Nazir Bilal Yavuz
Discussion: https://postgr.es/m/ZTiV8tn_MIb_H2rE@paquier.xyz
When evaluating a query with a multi-column GROUP BY clause, we can
minimize sort operations, or avoid them entirely, by synchronizing the
order of the GROUP BY clauses with the ORDER BY sort clause or with
the sort order coming from the underlying query tree. Grouping does
not imply any ordering, so the keys can be compared in arbitrary
order, and a Hash Agg leverages this. But for Group Agg, we simply
compared keys in the order specified in the query. This commit
explores alternative orderings of the keys, trying to find a cheaper
one.
The ordering of group keys may interact with other parts of the query, some of
which may not be known while planning the grouping. For example, there may be
an explicit ORDER BY clause or some other ordering-dependent operation higher up
in the query, and using the same ordering may allow using either incremental
sort or even eliminating the sort entirely.
The patch always keeps the ordering specified in the query among the
candidates considered, on the assumption that the user may have
additional insights.
This introduces a new GUC enable_group_by_reordering so that the optimization
may be disabled if needed.
Discussion: https://postgr.es/m/7c79e6a5-8597-74e8-0671-1c39d124c9d6%40sigaev.ru
Author: Andrei Lepikhov, Teodor Sigaev
Reviewed-by: Tomas Vondra, Claudio Freire, Gavin Flower, Dmitry Dolgov
Reviewed-by: Robert Haas, Pavel Borisov, David Rowley, Zhihong Yu
Reviewed-by: Tom Lane, Alexander Korotkov, Richard Guo, Alena Rybakina
These support functions transform expressions with constant range
values into direct comparisons on the range bound values, which are
frequently easier to optimize; for example, "x <@ '[10,20)'::int4range"
can become "x >= 10 AND x < 20". The transformation is skipped however
if it would require double evaluation of a volatile or expensive
element expression.
Along the way, add the range opfamily OID to range typcache entries,
since load_rangetype_info has to compute that anyway and it seems
silly to duplicate the work later.
Kim Johan Andersson and Jian He, reviewed by Laurenz Albe
Discussion: https://postgr.es/m/94f64d1f-b8c0-b0c5-98bc-0793a34e0851@kimmet.dk
Presently, the most straightforward way for a shared library to use
shared memory is to request it at server startup via a
shmem_request_hook, which requires specifying the library in
shared_preload_libraries. Alternatively, the library can create a
dynamic shared memory (DSM) segment, but absent a shared location
to store the segment's handle, other backends cannot use it. This
commit introduces a registry for DSM segments so that these other
backends can look up existing segments with a library-specified
string. This allows libraries to easily use shared memory without
needing to request it at server startup.
The registry is accessed via the new GetNamedDSMSegment() function.
This function handles allocating the segment and initializing it
via a provided callback. If another backend already created and
initialized the segment, it simply attaches the segment.
GetNamedDSMSegment() locks the registry appropriately to ensure
that only one backend initializes the segment and that all other
backends just attach it.
The registry itself is comprised of a dshash table that stores the
DSM segment handles keyed by a library-specified string.
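A usage sketch from a hypothetical extension (struct and names are
illustrative; the GetNamedDSMSegment() signature is the one introduced
here):

    typedef struct MySharedState
    {
        int64       counter;
    } MySharedState;

    /* Runs exactly once, while the registry holds its lock. */
    static void
    my_init_shmem(void *ptr)
    {
        MySharedState *state = (MySharedState *) ptr;

        state->counter = 0;
    }

    static MySharedState *
    attach_my_state(void)
    {
        bool        found;

        return (MySharedState *) GetNamedDSMSegment("my_extension",
                                                    sizeof(MySharedState),
                                                    my_init_shmem,
                                                    &found);
    }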
Reviewed-by: Michael Paquier, Andrei Lepikhov, Nikita Malakhov, Robert Haas, Bharath Rupireddy, Zhang Mingli, Amul Sul
Discussion: https://postgr.es/m/20231205034647.GA2705267%40nathanxps13
The comment was copy-pasted from the call to ProcSignalInit() in
AuxiliaryProcessMain(), which uses a similar scheme of having reserved
slots for aux processes after MaxBackends slots for backends. However,
ProcSignalInit() indexing starts from 1, whereas BackendStatusArray
starts from 0. The code is correct, but the comment was wrong.
Discussion: https://www.postgresql.org/message-id/f3ecd4cb-85ee-4e54-8278-5fabfb3a4ed0@iki.fi
Backpatch-through: v14
This commit adds XML_PARSE_HUGE to the libxml2 functions used in core
for the parsing of XML objects, raising the original 10MB limit imposed
by libxml2.
In most upstream code paths, XML_MAX_TEXT_LENGTH (10^7) is the
historical limit, which gets upgraded to XML_MAX_HUGE_LENGTH (10^9)
once XML_PARSE_HUGE is passed to the parser calls. Text is still
limited by palloc() to 1GB.
This makes it possible for the backend to handle XML objects larger
than 10MB in general, with a higher depth limit as well. The change
affects the contrib module xml2, the xml data type, and SQL/XML.
Author: Dmitry Koval
Reviewed-by: Tom Lane, Michael Paquier
Discussion: https://postgr.es/m/18274-98d16bc03520665f@postgresql.org
As in c27f8621ee et al.
Not as critical as other cases we've handled, but I figure if we're
going to add JsonbTableRoutine using TableFuncRoutine, this makes it
easier to jump around the code.
We've long had a policy that any toasted fields in a catalog tuple
should be pulled in-line before entering the tuple in a catalog cache.
However, that requires access to the catalog's toast table, and we'll
typically do AcceptInvalidationMessages while opening the toast table.
So it's possible that the catalog tuple is outdated by the time we
finish detoasting it. Since no cache entry exists yet, we can't
mark the entry stale during AcceptInvalidationMessages, and instead
we'll press forward and build an apparently-valid cache entry. The
upshot is that we have a race condition whereby an out-of-date entry
could be made in a backend's catalog cache, and persist there
indefinitely causing indeterminate misbehavior.
To fix, use the existing systable_recheck_tuple code to recheck
whether the catalog tuple is still up-to-date after we finish
detoasting it. If not, loop around and restart the process of
searching the catalog and constructing cache entries from the top.
The case is rare enough that this shouldn't create any meaningful
performance penalty, even in the SearchCatCacheList case where
we need to tear down and reconstruct the whole list.
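Schematically, the detoast-then-recheck goes like this (a fragment;
the real loop structure is in catcache.c):

    /* In-line any out-of-line fields before caching the tuple. */
    dtp = toast_flatten_tuple(ntp, cache->cc_tupdesc);

    /*
     * Detoasting may have run AcceptInvalidationMessages; if the tuple
     * is no longer the live version, discard it and search again.
     */
    if (!systable_recheck_tuple(scandesc, ntp))
    {
        heap_freetuple(dtp);
        stale = true;
        break;                  /* the outer loop restarts the search */
    }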
Indeed, the case is so rare that AFAICT it doesn't occur during
our regression tests, and there doesn't seem to be any easy way
to build a test that would exercise it reliably. To allow
testing of the retry code paths, add logic (in USE_ASSERT_CHECKING
builds only) that randomly pretends that the recheck failed about
one time out of a thousand. This is enough to ensure that we'll
pass through the retry paths during most regression test runs.
By adding an extra level of looping, this commit creates a need
to reindent most of SearchCatCacheMiss and SearchCatCacheList.
I'll do that separately, to allow putting those changes in
.git-blame-ignore-revs.
Patch by me; thanks to Alexander Lakhin for having built a test
case to prove the bug is real, and to Xiaoran Wang for review.
Back-patch to all supported branches.
Discussion: https://postgr.es/m/1393953.1698353013@sss.pgh.pa.us
Discussion: https://postgr.es/m/CAGjhLkOoBEC9mLsnB42d3CO1vcMx71MLSEuigeABbQ8oRdA6gw@mail.gmail.com
This changes the pg_attribute field attstattarget into a nullable
field in the variable-length part of the row. If no value is set by
the user for attstattarget, it is now null instead of previously -1.
This saves space in pg_attribute and tuple descriptors for most
practical scenarios. (ATTRIBUTE_FIXED_PART_SIZE is reduced from 108
to 104.) Also, null is the semantically more correct value.
The ANALYZE code internally continues to represent the default
statistics target by -1, so that that code can avoid having to deal
with null values. But that is now contained to the ANALYZE code.
Only the DDL code needs to deal with attstattarget possibly being
null.
For system columns, the field is now always null. The ANALYZE code
skips system columns anyway.
To set a column's statistics target to the default value, the new
command form ALTER TABLE ... SET STATISTICS DEFAULT can be used. (SET
STATISTICS -1 still works.)
Reviewed-by: Alvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://www.postgresql.org/message-id/flat/4da8d211-d54d-44b9-9847-f2a9f1184c76@eisentraut.org
jit.c and dfmgr.c had a copy of the same code to check whether a file
exists, with a twist: jit.c did not check for EACCES when the stat()
call failed for the tested path. This refactored routine will be used
by an upcoming patch.
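The shared routine amounts to a stat()-based probe of this shape (name
hypothetical):

    static bool
    file_exists(const char *name)
    {
        struct stat st;

        if (stat(name, &st) == 0)
            return !S_ISDIR(st.st_mode);
        else if (errno != ENOENT && errno != ENOTDIR && errno != EACCES)
            ereport(ERROR,
                    (errcode_for_file_access(),
                     errmsg("could not access file \"%s\": %m", name)));
        return false;
    }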
Reviewed-by: Ashutosh Bapat
Discussion: https://postgr.es/m/ZTiV8tn_MIb_H2rE@paquier.xyz
In commit 3e51b278d I (tgl) caused initdb to leave lc_messages and
other lc_xxx GUCs commented-out in the installed postgresql.conf file
if they were going to be set to 'C'. This was a hack for cosmetic
purposes, and it was buggy because lc_messages' wired-in default is
not 'C' but '' (empty string). That led to --no-locale not having
the expected effect, since the postmaster would then obtain
lc_messages from its startup environment.
Let's just revert to the prior behavior of always de-commenting the
lc_xxx entries; the argument for changing that longstanding behavior
was weak in the first place.
Also, fix postgresql.conf.sample's erroneous claim that the default
value of lc_messages is 'C'. I suspect that was what misled me into
making this mistake in the first place.
Report and patch by Kyotaro Horiguchi. Back-patch to v16 where
the problem was introduced.
Discussion: https://postgr.es/m/20231122.162700.1995154567625541112.horikyota.ntt@gmail.com
Both lwlocknames.txt and wait_event_names.txt contain a list of all
the predefined LWLocks, i.e., those with predefined positions
within MainLWLockArray. It is easy to miss one or the other,
especially since the list in wait_event_names.txt omits the "Lock"
suffix from all the LWLock wait events. This commit adds a cross-
check of these lists to the script that generates lwlocknames.h.
If the lists do not match exactly, building will fail.
Suggested-by: Robert Haas
Reviewed-by: Robert Haas, Michael Paquier, Bertrand Drouvot
Discussion: https://postgr.es/m/20240102173120.GA1061678%40nathanxps13
Since commit 599b33b94, this function assumed that every RTE_RELATION
RangeTblEntry would have an associated RelOptInfo. But that's not so:
we only build RelOptInfos for relations that are scanned by the query.
In particular the target of an INSERT won't have one, so that Vars
appearing in an INSERT ... RETURNING list will not have an associated
RelOptInfo. This apparently wasn't a problem before commit f7816aec2
taught examine_simple_variable() to drill down into CTEs containing
INSERT RETURNING, but it is now.
To fix, add a fallback code path that gets the userid to use directly
from the RTEPermissionInfo associated with the RTE. (Sadly, we must
have two code paths, because not every RTE has a RTEPermissionInfo
either.)
Per report from Alexander Lakhin. No back-patch, since the case is
apparently unreachable before f7816aec2.
Discussion: https://postgr.es/m/608a4886-6c60-0f9e-97d5-591256bd4150@gmail.com
If we have DECHIST statistics about the argument expression, use
the average number of distinct elements as the array length estimate.
(It'd be better to use the average total number of elements, but
that is not currently calculated by compute_array_stats(), and
it's unclear that it'd be worth extra effort to get.)
To do this, we have to change the signature of estimate_array_length
to pass the "root" pointer. While at it, also change its result
type to "double". That's probably not really necessary, but it
avoids any risk of overflow of the value extracted from DECHIST.
All existing callers are going to use the result in a "double"
calculation anyway.
Paul Jungwirth, reviewed by Jian He and myself
Discussion: https://postgr.es/m/CA+renyUnM2d+SmrxKpDuAdpiq6FOM=FByvi6aS6yi__qyf6j9A@mail.gmail.com
Second attempt at 283a95da92. Since we can't reorder the enum values
of JsonPathItemType, instead reorder the switch cases where they are
used to generally follow the order of the enum values, for better
maintainability.
This reverts commit 283a95da92.
The reordering of the JsonPathItemType enum values affects the on-disk
binary compatibility of the jsonpath type, so we must not change it.
Revert for now and reconsider.