Since the introduction of extended statistics, we've disallowed references
to system columns, so for example
CREATE STATISTICS s ON ctid FROM t;
would fail. But with extended statistics on expressions, it was possible
to work around this limitation quite easily:
CREATE STATISTICS s ON (ctid::text) FROM t;
This is an oversight in a4d75c86bf, fixed by adding a simple check.
Backpatch to PostgreSQL 14, where support for extended statistics on
expressions was introduced.
Backpatch-through: 14
Discussion: https://postgr.es/m/20210816013255.GS10479%40telsasoft.com
Improve various item descriptions. Rearrange some things into
(IMO) more logical order. Fix missing markup and dubious
choices of link destinations. Drop a couple of items that
were later back-patched or otherwise don't seem to need
to be documented here.
An elog that reports the value of a transaction ID stored on a deleted
nbtree page was added by commit e5d8a999, which taught page deletion to
store full 64-bit XIDs. On further reflection it seems very chatty, so
lower its elevel from NOTICE to DEBUG2.
Author: Peter Geoghegan <pg@bowt.ie>
Backpatch: 14-, just like the nbtree XID enhancement.
Commit 55dc86eca changed pull_varnos to use (if possible) the associated
ph_eval_at for a PlaceHolderVar. I missed a fine point though: we might
be looking at a PHV in the quals or tlist of a child appendrel, in which
case we need to compute a ph_eval_at value that's been translated in the
same way that the PHV itself has been (cf. adjust_appendrel_attrs).
Fortunately, enough info is available in the PlaceHolderInfo to make
such translation possible without additional outside data, so we don't
need another round of uglification of planner APIs. This is a little
bit complicated, but since it's a hard-to-hit corner case, I'm not much
worried about adding cycles here.
Per report from Jaime Casanova. Back-patch to v12, like the previous
commit.
Discussion: https://postgr.es/m/20210915230959.GB17635@ahch-to
The rewriter transformation for SEARCH BREADTH FIRST produces a
FieldSelect on a Var of type RECORD, where the Var references the
recursive union's worktable output. EXPLAIN VERBOSE failed to handle
this case, because it only expected such Vars to appear in CteScans,
not WorkTableScans. Fix that, and add some test cases exercising
EXPLAIN on SEARCH and CYCLE queries.
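For instance, assuming a table tree(id, link, data) like the one in the
documentation's examples, a query of this shape formerly made EXPLAIN
VERBOSE fail:
  EXPLAIN (VERBOSE, COSTS OFF)
  WITH RECURSIVE search_tree(id, link, data) AS (
      SELECT t.id, t.link, t.data FROM tree t
    UNION ALL
      SELECT t.id, t.link, t.data
      FROM tree t, search_tree st
      WHERE t.id = st.link
  ) SEARCH BREADTH FIRST BY id SET ordercol
  SELECT * FROM search_tree;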
In principle this oversight is an old bug, but it seems that the
case is unreachable without SEARCH BREADTH FIRST, because the
parser fails when attempting to create such a reference manually.
So for today I'll just patch HEAD/v14. Someday we might find that
the code portion of this patch needs to be back-patched further.
Per report from Atsushi Torikoshi.
Discussion: https://postgr.es/m/5bafa66ad529e11860339565c9e7c166@oss.nttdata.com
Session statistics, as introduced by 960869da08, had several shortcomings:
- an additional GetCurrentTimestamp() call that also impaired the accuracy of
the data collected
This can be avoided by passing the current timestamp we already have in
pgstat_report_stat().
- an additional statistics UDP packet sent every 500ms
This is solved by adding the new statistics to PgStat_MsgTabstat.
This is conceptually ugly, because session statistics are not
table statistics. But the struct already contains data unrelated
to tables, so there is not much damage done.
Connection and disconnection are reported in separate messages, which
limits the overhead to two additional messages per session and a
slight increase in PgStat_MsgTabstat size (the same number of table
stats still fit).
- Session time computation could overflow on systems where long is 32 bit.
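For reference, the counters involved are the session statistics that
960869da08 added to pg_stat_database, e.g.:
  SELECT datname, sessions, session_time, active_time,
         idle_in_transaction_time, sessions_abandoned,
         sessions_fatal, sessions_killed
  FROM pg_stat_database;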
Reported-By: Andres Freund <andres@anarazel.de>
Author: Andres Freund <andres@anarazel.de>
Author: Laurenz Albe <laurenz.albe@cybertec.at>
Discussion: https://postgr.es/m/20210801205501.nyxzxoelqoo4x2qc%40alap3.anarazel.de
Backpatch: 14-, where the feature was introduced.
The ProcArrayGroupClearXid function has a parameter named "proc",
but the same name was also used for its local variables. This commit
fixes that variable shadowing, to improve code readability.
Back-patch to all supported versions, to make future back-patching
easier, even though this patch is classified as refactoring only.
Reported-by: Ranier Vilela
Author: Ranier Vilela, Aleksander Alekseev
Discussion: https://postgr.es/m/CAEudQAqyoTZC670xWi6w-Oe2_Bk1bfu2JzXz6xRfiOUzm7xbyQ@mail.gmail.com
All size_t variables declared in procarray.c actually hold int values.
Let's use int instead of size_t for those variables, which reduces
-Wsign-compare compiler warnings.
Back-patch to v14 where commit 941697c3c1 added size_t variables
in procarray.c, to make future back-patching easier, even though
this patch is classified as refactoring only.
Reported-by: Ranier Vilela
Author: Ranier Vilela, Aleksander Alekseev
Discussion: https://postgr.es/m/CAEudQAqyoTZC670xWi6w-Oe2_Bk1bfu2JzXz6xRfiOUzm7xbyQ@mail.gmail.com
It's possible to execute user-defined SQL in some background processes;
for example, logical replication workers can fire triggers. This opens
the possibility that someone would try to execute LISTEN in such a
context. But since only regular backends ever call
ProcessNotifyInterrupt, no messages would actually be received, and
thus the registered listener would simply prevent the message queue
from being cleaned. Eventually NOTIFY would stop working, which is bad.
Perhaps someday somebody will invent infrastructure to make listening
in a background worker actually useful. In the meantime, forbid it.
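As a sketch of how this could arise (hypothetical names throughout),
a trigger fired during logical replication apply might do:
  CREATE FUNCTION try_listen() RETURNS trigger LANGUAGE plpgsql AS $$
  BEGIN
    LISTEN apply_events;  -- now rejected in a background process
    RETURN NEW;
  END $$;
  CREATE TRIGGER tg_listen AFTER INSERT ON replicated_tab
    FOR EACH ROW EXECUTE FUNCTION try_listen();
  ALTER TABLE replicated_tab ENABLE ALWAYS TRIGGER tg_listen;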
Back-patch to v13, which is where we introduced the MyBackendType
variable. It'd be a lot harder to implement the check without that,
and it doesn't seem worth the trouble.
Discussion: https://postgr.es/m/153243441449.1404.2274116228506175596@wrigleys.postgresql.org
Commit 054adca641ac1279dc8d9b74fda41948ac35e9a9 neglected to
initialize the type_id field of the synthesized type cache entry, so
it would make a new one on every call.
Also, use the per-function memory context for this; otherwise
it leaks memory.
Discussion: https://www.postgresql.org/message-id/flat/17158-8a2ba823982537a4%40postgresql.org
The available refresh options are specified as refresh_options under
REFRESH PUBLICATION, and DROP PUBLICATION itself has an option named
refresh. Clarify what we mean by refresh options to avoid confusion.
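For example (hypothetical subscription and publication names):
  ALTER SUBSCRIPTION mysub DROP PUBLICATION mypub WITH (refresh = false);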
Backpatch through v14 where ALTER SUBSCRIPTION ... DROP PUBLICATION
was introduced.
Author: Masahiko Sawada <sawada.mshk@gmail.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Reviewed-by: Peter Eisentraut <peter.eisentraut@enterprisedb.com>
Reviewed-by: Peter Smith <smithpb2250@gmail.com>
Discussion: https://postgr.es/m/CAD21AoCm1wJ3A8Q9EmBjRbShYkJ+o+Oa_z9O0hvwhvhUa2BSyg@mail.gmail.com
Backpatch-through: 14
Formerly, we sent signals for outgoing NOTIFY messages within
ProcessCompletedNotifies, which was also responsible for sending
relevant ones of those messages to our connected client. It therefore
had to run during the main-loop processing that occurs just before
going idle. This arrangement had two big disadvantages:
* Now that procedures allow intra-command COMMITs, it would be
useful to send NOTIFYs to other sessions immediately at COMMIT
(though, for reasons of wire-protocol stability, we still shouldn't
forward them to our client until end of command).
* Background processes such as replication workers would not send
NOTIFYs at all, since they never execute the client communication
loop. We've had requests to allow triggers running in replication
workers to send NOTIFYs, so that's a problem.
To fix these things, move transmission of outgoing NOTIFY signals
into AtCommit_Notify, where it will happen during CommitTransaction.
Also move the possible call of asyncQueueAdvanceTail there, to
ensure we don't bloat the async SLRU if a background worker sends
many NOTIFYs with no one listening.
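For example, with this change a procedure like the following sketch
(hypothetical channel name) delivers its NOTIFY to other listening
sessions at each embedded COMMIT, rather than only when the CALL
finishes:
  CREATE PROCEDURE do_steps() LANGUAGE plpgsql AS $$
  BEGIN
    NOTIFY step_done;  -- queued in the current transaction
    COMMIT;            -- the signal now goes out here ...
    NOTIFY step_done;
    COMMIT;            -- ... and here, not only at end of command
  END $$;
  CALL do_steps();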
We can also drop the call of asyncQueueReadAllNotifications,
allowing ProcessCompletedNotifies to go away entirely. That's
because commit 790026972 added a call of ProcessNotifyInterrupt
adjacent to PostgresMain's call of ProcessCompletedNotifies,
and that does its own call of asyncQueueReadAllNotifications,
meaning that we were uselessly doing two such calls (inside two
separate transactions) whenever inbound notify signals coincided
with an outbound notify. We need only set notifyInterruptPending
to ensure that ProcessNotifyInterrupt runs, and we're done.
The existing documentation suggests that custom background workers
should call ProcessCompletedNotifies if they want to send NOTIFY
messages. To avoid an ABI break in the back branches, reduce it
to an empty routine rather than removing it entirely. Removal
will occur in v15.
Although the problems mentioned above have existed for a while,
I don't feel comfortable back-patching this any further than v13.
There was quite a bit of churn in adjacent code between 12 and 13.
At minimum we'd have to also backpatch 51004c717, and a good deal
of other adjustment would also be needed, so the benefit-to-risk
ratio doesn't look attractive.
Per bug #15293 from Michael Powers (and similar gripes from others).
Artur Zakirov and Tom Lane
Discussion: https://postgr.es/m/153243441449.1404.2274116228506175596@wrigleys.postgresql.org
It's possible for us to copy an AlternativeSubPlan expression node
into multiple places, for example the scan quals of several
partition children. Then it's possible that we choose a different
one of the alternatives as optimal in each place. Commit 41efb8340
failed to consider this scenario, so its attempt to remove "unused"
subplans could remove subplans that were still used elsewhere.
Fix by delaying the removal logic until we've examined all the
AlternativeSubPlans in a given query level. (This does assume that
AlternativeSubPlans couldn't get copied to other query levels, but
for the foreseeable future that's fine; cf qual_is_pushdown_safe.)
Per report from Rajkumar Raghuwanshi. Back-patch to v14
where the faulty logic came in.
Discussion: https://postgr.es/m/CAKcux6==O3NNZC3bZ2prRYv3cjm3_Zw1GfzmOjEVqYN4jub2+Q@mail.gmail.com
If an allocation fails within LLVM, it is not safe to call back into LLVM,
as LLVM is not generally safe against exceptions / stack-unwinding. Thus
errors raised while in LLVM code are promoted to FATAL. However,
llvm_shutdown() did call back into LLVM even in such cases, while
llvm_release_context() was careful not to do so.
We cannot generally skip shutting down LLVM, as that can break profiling. But
it's OK to do so if there was an error from within LLVM.
Reported-By: Jelte Fennema <Jelte.Fennema@microsoft.com>
Author: Andres Freund <andres@anarazel.de>
Author: Justin Pryzby <pryzby@telsasoft.com>
Discussion: https://postgr.es/m/AM5PR83MB0178C52CCA0A8DEA0207DC14F7FF9@AM5PR83MB0178.EURPRD83.prod.outlook.com
Backpatch: 11-, where jit was introduced
In d9d8aa9bb9a I added a defensive NULL assignment to protect against a
not-too-smart compiler warning about uninitialized variable use after the
switch. Unfortunately I only did so on master and forgot to adjust that for
14.
Stephen noticed that there actually is a compiler warning :(.
Reported-By: Stephen Frost <sfrost@snowman.net>
Discussion: https://postgr.es/m/20210827224639.GX17906@tamriel.snowman.net
Commits ffa2e4670 and 52a10224e caused libpq's connection-establishment
functions to usually leave a nonempty string in the connection's
errorMessage buffer, even after a successful connection. While that
was intentional on my part, more sober reflection says that it wasn't
a great idea: the string would be a bit confusing. Also this broke at
least one application that checked for connection success by examining
the errorMessage, instead of using PQstatus() as documented. Let's
clear the buffer at success exit, restoring the pre-v14 behavior.
Discussion: https://postgr.es/m/4170264.1620321747@sss.pgh.pa.us
Ordinarily, using EXIT this way would draw "control reached end of
function without RETURN". However, if the function is one where we
don't require an explicit RETURN (such as a DO block), that should
not happen. It did anyway, because add_dummy_return() neglected to
account for the case.
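A minimal illustration of the failing case (the outer block must be
labeled, since an unlabeled EXIT is not allowed outside a loop):
  DO $$
  <<blk>>
  BEGIN
    RAISE NOTICE 'leaving early';
    EXIT blk;  -- formerly drew the bogus error here
    RAISE NOTICE 'not reached';
  END;
  $$;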
Per report from Herwig Goemans. Back-patch to all supported branches.
Discussion: https://postgr.es/m/868ae948-e3ca-c7ec-95a6-83cfc08ef750@gmail.com
While processing toast changes in logical decoding, we rejigger the
tuple change to point to in-memory toast tuples instead of to on-disk
toast tuples. To keep the memory accounting correct, we subtract the
old change size up front and re-add the size of the newly-computed
tuple at the end. Now, if any error occurs before we re-add the new
size, we release the changes, which again updates the accounting info
(subtracting the size from the counters). That subtraction underflowed,
leading to an assertion failure in assert-enabled builds and to wrong
memory accounting in the reorder buffer otherwise.
Author: Bertrand Drouvot
Reviewed-by: Amit Kapila
Backpatch-through: 13, where memory accounting was introduced
Discussion: https://postgr.es/m/92b0ee65-b8bd-e42d-c082-4f3f4bf12d34@amazon.com
An out-of-memory failure happening when allocating the structures to
store the connection parameter keywords and values would mess up the
set of saved connections, as on failure the pthread mutex would
still be held with the new connection object listed but free()'d.
Rather than just unlocking the mutex, which would leave the static list
of connections in an inconsistent state, move the allocation of the
structures for the connection parameters before beginning the list
manipulation. This ensures that the list of connections and the
connection mutex remain consistent all the time in this code path.
This error is unlikely to happen, but it could mess up badly
with ECPG clients in surprising ways, so backpatch all the way down.
Reported-by: ryancaicse
Discussion: https://postgr.es/m/17186-b4cfd8f0eb4d1dee@postgresql.org
Backpatch-through: 9.6
If search_start is greater than the length of the string, we should just
return REG_NOMATCH immediately. (Note that the equality case should
*not* be rejected, since the pattern might be able to match zero
characters.) This guards various internal assumptions that the min of a
range of string positions is not more than the max. Violation of those
assumptions could allow an attempt to fetch string[search_start-1],
possibly causing a crash.
Jaime Casanova pointed out that this situation is reachable with the
new regexp_xxx functions that accept a user-specified start position.
I don't believe it's reachable via any in-core call site in v14 and
below. However, extensions could possibly call pg_regexec with an
out-of-range search_start, so let's back-patch the fix anyway.
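For instance, with HEAD's new functions, a sketch of the reachable
case would be:
  SELECT regexp_count('abc', 'b', 1000);
With the fix, such an out-of-range start position simply finds no
matches instead of risking a fetch past the end of the string.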
Discussion: https://postgr.es/m/20210911180357.GA6870@ahch-to
We have long forbidden fetching backwards from a NO SCROLL cursor,
but the prohibition didn't extend to cases in which we rewind the
query altogether and then re-fetch forwards. I think the reason is
that this logic was mainly meant to protect plan nodes that can't
be run in the reverse direction. However, re-reading the query output
is problematic if the query is volatile (which includes SELECT FOR
UPDATE, not just queries with volatile functions): the re-read can
produce different results, which confuses the cursor navigation logic
completely. Another reason for disliking this approach is that some
code paths will either fetch backwards or rewind-and-fetch-forwards
depending on the distance to the target row; so that seemingly
identical use-cases may or may not draw the "cursor can only scan
forward" error. Hence, let's clean things up by disallowing rewind
as well as fetch-backwards in a NO SCROLL cursor.
Ordinarily we'd only make such a definitional change in HEAD, but
there is a third reason to consider this change now. Commit ba2c6d6ce
created some new user-visible anomalies for non-scrollable cursors
WITH HOLD, in that navigation in the cursor result got confused if the
cursor had been partially read before committing. The only good way
to resolve those anomalies is to forbid rewinding such a cursor, which
allows removal of the incorrect cursor state manipulations that
ba2c6d6ce added to PersistHoldablePortal.
To minimize the behavioral change in the back branches (including
v14), refuse to rewind a NO SCROLL cursor only when it has a holdStore,
ie has been held over from a previous transaction due to WITH HOLD.
This should avoid breaking most applications that have been sloppy
about whether to declare cursors as scrollable. We'll enforce the
prohibition across-the-board beginning in v15.
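In the back branches, the newly-forbidden case therefore looks like
this sketch:
  BEGIN;
  DECLARE c NO SCROLL CURSOR WITH HOLD
    FOR SELECT * FROM generate_series(1, 10) g;
  FETCH 3 FROM c;
  COMMIT;                   -- c is now held over (it has a holdStore)
  FETCH ABSOLUTE 1 FROM c;  -- requires a rewind, so it now fails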
Back-patch to v11, as ba2c6d6ce was.
Discussion: https://postgr.es/m/3712911.1631207435@sss.pgh.pa.us
Some plan node types don't react well to being called again after
they've already returned NULL. PortalRunSelect() has long dealt
with this by calling the executor with NoMovementScanDirection
if it sees that we've already run the portal to the end. However,
commit ba2c6d6ce overlooked this point, so that persisting an
already-fully-fetched cursor would fail if it had such a plan.
Per report from Tomas Barton. Back-patch to v11, as the faulty
commit was. (I've omitted a test case because the type of plan
that causes a problem isn't all that stable.)
Discussion: https://postgr.es/m/CAPV2KRjd=ErgVGbvO2Ty20tKTEZZr6cYsYLxgN_W3eAo9pf5sw@mail.gmail.com
When throttling is used, transactions that lag behind schedule by
more than the latency limit are counted and reported as skipped.
Previously, pgbench kept counting skipped transactions even after the
timer specified by the -T option had expired. This could take a very
long time, especially when an unrealistically high rate setting in the
-R option caused a large number of transactions to lag behind schedule,
so pgbench would not end promptly and could look stuck to users.
To fix the issue, this commit changes pgbench so that it stops counting
skipped transactions as soon as the timer is exceeded. The timer can
make pgbench end soon even when there are lots of skipped transactions
that have not been counted yet.
Note that there is no guarantee that all skipped transactions are
counted under -T, though there is under -t. This is OK in practice
because it's very unlikely to happen with realistic settings. Also,
this is not an issue newly introduced by this commit; pgbench could
already end without counting all skipped transactions.
Back-patch to v14. Per discussion, we decided not to bother
back-patching to older branches because it's hard to imagine the
issue happening in practice (with realistic settings).
Author: Yugo Nagata, Fabien COELHO
Reviewed-by: Greg Sabino Mullane, Fujii Masao
Discussion: https://postgr.es/m/20210613040151.265ff59d832f835bbcf8b3ba@sraoss.co.jp
We don't allow relations to exceed 2^32-1 blocks, because block
numbers are 32 bits and the last possible block number is reserved
to mean InvalidBlockNumber. There is a check for this in mdextend,
but that's really way too late, because the smgr API requires us to
create a buffer for the block-to-be-added, and we do not want to
have any buffer with blocknum InvalidBlockNumber. (Such a case
can trigger assertions in bufmgr.c, plus I think it might confuse
ReadBuffer's logic for data-past-EOF later on.) So put the check
into ReadBuffer.
Per report from Christoph Berg. It's been like this forever,
so back-patch to all supported branches.
Discussion: https://postgr.es/m/YTn1iTkUYBZfcODk@msg.credativ.de
Previously, walreceiver always closed the currently-opened WAL segment
and created its archive notification file only after it had finished
writing out that segment and had received some WAL data belonging to
the next segment. If walreceiver exited just before any WAL data for
the next segment arrived at the standby, it did not create the archive
notification file for the current segment even though that segment was
known to be complete. This behavior could delay WAL archiving of the
segment until a subsequent restartpoint or checkpoint created its
notification file.
To fix the issue, this commit changes walreceiver so that it creates
the archive notification file for the current WAL segment immediately
once that segment is known to be complete, without waiting for the
next WAL data to arrive.
Back-patch to all supported branches.
Reported-by: Kyotaro Horiguchi
Author: Fujii Masao
Reviewed-by: Kyotaro Horiguchi
Discussion: https://postgr.es/m/20200630.165503.1465894182551545886.horikyota.ntt@gmail.com
Coverity complained that one caller of getFormattedTypeName() failed
to free the returned string. Which is true, but rather than fixing
that one, let's get rid of this tedious and error-prone requirement.
Now that getFormattedTypeName() caches its result, strdup'ing that
result and expecting the caller to free it accomplishes little except
to waste cycles. We do create a leak in the case where getTypes didn't
make a TypeInfo for the type, but that basically shouldn't ever happen.
Back-patch, as commit 6c450a861 was. This isn't a particularly
interesting bug fix, but the API change seems like a hazard for
future back-patching activity if we don't back-patch it.
If we copy data-modifying CTEs from the original query to a replacement
query (from a DO INSTEAD rule), we must set hasModifyingCTE properly
in the replacement query. Failure to do this can cause various
unpleasantness, such as unsafe usage of parallel plans. The code also
neglected to propagate hasRecursive, though that's only cosmetic at
the moment.
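As an illustrative sketch (hypothetical tables and rule), the hazard
arises when the last query below is rewritten and the replacement
query fails to inherit the hasModifyingCTE flag:
  CREATE TABLE t (i int);
  CREATE TABLE t_backing (i int);
  CREATE TABLE audit (i int);
  CREATE RULE t_upd AS ON UPDATE TO t DO INSTEAD
    UPDATE t_backing SET i = NEW.i WHERE t_backing.i = OLD.i;
  WITH logged AS (INSERT INTO audit VALUES (1) RETURNING i)
  UPDATE t SET i = 2;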
A difficulty arises if the rule action is an INSERT...SELECT. We
attach the original query's RTEs and CTEs to the sub-SELECT Query, but
data-modifying CTEs are only allowed to appear in the topmost Query.
For the moment, throw an error in such cases. It would probably be
possible to avoid this error by attaching the CTEs to the top INSERT
Query instead; but that would require a bunch of new code to adjust
ctelevelsup references. Given the narrowness of the use-case, and
the need to back-patch this fix, it does not seem worth the trouble
for now. We can revisit this if we get field complaints.
Per report from Greg Nancarrow. Back-patch to all supported branches.
(The test case added here does not fail before v10, but there are
plenty of places checking top-level hasModifyingCTE in 9.6, so I have
no doubt that this code change is necessary there too.)
Greg Nancarrow and Tom Lane
Discussion: https://postgr.es/m/CAJcOf-f68DT=26YAMz_i0+Au3TcLO5oiHY5=fL6Sfuits6r+_w@mail.gmail.com
Discussion: https://postgr.es/m/CAJcOf-fAdj=nDKMsRhQzndm-O13NY4dL6xGcEvdX5Xvbbi0V7g@mail.gmail.com
Commit 01e658fa74 added hash support for row types. This also added
support for hashing anonymous record types, using the same approach
that the type cache uses for comparison support for record types: It
just reports that it works, but it might fail at run time if a
component type doesn't actually support the operation. We get away
with that for comparison because most types support that. But some
types don't support hashing, so the current state can result in
failures at run time where the planner chooses hashing over sorting,
whereas that previously worked if only sorting was an option.
We do, however, want the record hashing support for path tracking in
recursive unions, and the SEARCH and CYCLE clauses built on that. In
that case, hashing is the only plan option. So to enable that, this
commit implements the following approach: The type cache does not
report that hashing is available for the record type. This undoes
that part of 01e658fa74. Instead, callers that require hashing no
matter what can override that result themselves. This patch only
touches the callers to make the aforementioned recursive query cases
work, namely the parse analysis of unions, as well as the hash_array()
function.
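For example, deduplication in a recursive UNION (as opposed to UNION
ALL) can only be done by hashing, so parse analysis must insist on
hash support even for a record column; a sketch:
  WITH RECURSIVE t(r) AS (
      SELECT ROW(1, 2)
    UNION
      SELECT r FROM t
  ) SELECT * FROM t;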
Reported-by: Sait Talha Nisanci <sait.nisanci@microsoft.com>
Bug: #17158
Discussion: https://www.postgresql.org/message-id/flat/17158-8a2ba823982537a4%40postgresql.org
Updates/Deletes on a relation were allowed even without replica identity
after we define a publication for all tables. This would later lead to
an error on subscribers. The reason was that for such publications we
were not invalidating the relcache, so the publication information for
relations was not getting rebuilt. Similarly, we were not invalidating
the relcache after dropping such publications, which would keep
prohibiting Updates/Deletes for lack of replica identity even once no
publication remained.
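A sketch of the first scenario (hypothetical table name):
  CREATE TABLE no_ident (i int);  -- no replica identity usable for updates
  CREATE PUBLICATION pub_all FOR ALL TABLES;
  UPDATE no_ident SET i = 1;  -- must be rejected, but previously was allowed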
Author: Vignesh C and Hou Zhijie
Reviewed-by: Hou Zhijie, Kyotaro Horiguchi, Amit Kapila
Backpatch-through: 10, where it was introduced
Discussion: https://postgr.es/m/CALDaNm0pF6zeWqCA8TCe2sDuwFAy8fCqba=nHampCKag-qLixg@mail.gmail.com
Commit 67c33a114 should have set v14's catversion back to what it was
before 3eda9fc09, to avoid forcing a useless pg_upgrade cycle on users
of 14beta3. Do that now.
Discussion: https://postgr.es/m/2598498.1630702074@sss.pgh.pa.us
We make each AIX shared library export all globals found in .o files
that originate in the library. That doesn't include symbols acquired by
-lpgcommon_shlib. That is good on average, but it became a problem for
libpq when commit e6afa8918c461c1dd80c5063a950518fa4e950cd moved five
official libpq API symbols into src/common. Fix this by implementing
the SHLIB_EXPORTS mechanism for AIX, so affected libraries export the
same symbols that they export on Linux. This reintroduces symbols
pg_encoding_to_char, pg_utf_mblen, pg_char_to_encoding,
pg_valid_server_encoding, and pg_valid_server_encoding_id. Back-patch
to v13, where the aforementioned commit first appeared. While a minor
release is usually the wrong time to add or remove symbol exports in
libpq or libecpg, we should expect users to want each documented symbol.
Tony Reix
Discussion: https://postgr.es/m/PR3PR02MB6396742E2FC3E77D37A920BC86C79@PR3PR02MB6396.eurprd02.prod.outlook.com
timetz_zone() delivered completely wrong answers if the zone was
specified by a dynamic TZ abbreviation, because it failed to account
for the difference between the POSIX conventions for field values in
struct pg_tm and the conventions used in PG-specific datetime code.
As a stopgap fix, just adjust the tm_year and tm_mon fields to match
PG conventions. This is fixed in a different way in HEAD (388e71af8)
but I don't want to back-patch the change of reference point.
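For example (assuming an abbreviation that the active
timezone_abbreviations file defines dynamically, as the default file
does for MSK):
  SELECT timetz '12:00:00+00' AT TIME ZONE 'MSK';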
Discussion: https://postgr.es/m/CAJ7c6TOMG8zSNEZtCn5SPe+cCk3Lfxb71ZaQwT2F4T7PJ_t=KA@mail.gmail.com
Since ea53100d5 (PostgreSQL 12), the shipped pkg-config files have
been broken for statically linking libpq because libpgcommon and
libpgport are missing. This patch adds those two missing private
dependencies (in a non-hardcoded way).
Reported-by: Filip Gospodinov <f@gospodinov.ch>
Discussion: https://www.postgresql.org/message-id/flat/c7108bde-e051-11d5-a234-99beec01ce2a@gospodinov.ch
Attempting to make hashfloat4() look as much as possible like
hashfloat8(), I'd figured I could replace NaNs with get_float4_nan()
before widening to float8. However, results from protosciurus
and topminnow show that on some platforms that produces a different
bit-pattern from get_float8_nan(), breaking the intent of ce773f230.
Rearrange so that we use the result of get_float8_nan() for all NaN
cases. As before, back-patch.
This reverts commit 515e3d84a0b5 and equivalent commits in back
branches. That solution turned out to have problems of its own, so
we'll try again with a different approach.
Per note from Andres Freund
Discussion: https://postgr.es/m/20210831042949.52eqp5xwbxgrfank@alap3.anarazel.de
Replace fixed-length command buffers with psprintf() calls. We didn't
have anything as convenient as psprintf() when this code was written,
but now that we do, there's little reason for the limitation to
stand. Removing it eliminates some corner cases where (for example)
starting the postmaster with a whole lot of options fails.
Most individual file names that pg_ctl deals with are still restricted
to MAXPGPATH, but we've seldom had complaints about that limitation
so long as it only applies to one filename.
Back-patch to all supported branches.
Phil Krylov
Discussion: https://postgr.es/m/567e199c6b97ee19deee600311515b86@krylov.eu
Previously this was allowed, but the collation effectively vanished
into the ether because of the way lookup_collation() works: you could
not use the collation, nor even drop it. Seems better to give an
error up front than to leave the user wondering why it doesn't work.
(Because this test is in DefineCollation not CreateCollation, it does
not prevent pg_import_system_collations from creating ICU collations,
regardless of the initially-chosen encoding.)
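For example, in a database whose encoding ICU does not support (such
as SQL_ASCII), this now fails up front instead of creating an unusable,
undroppable collation:
  CREATE COLLATION german (provider = icu, locale = 'de-DE');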
Per bug #17170 from Andrew Bille. Back-patch to v10 where ICU support
was added.
Discussion: https://postgr.es/m/17170-95845cf3f0a9c36d@postgresql.org
543f36b43d was too hasty in thinking that the volatility of date_bin()
had to match date_trunc(), since only the latter references
session_timezone.
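That is, date_bin() computes purely from its arguments, e.g.:
  SELECT date_bin('15 minutes',
                  timestamptz '2021-09-10 17:47:30+00',
                  timestamptz '2001-01-01 00:00:00+00');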
Bump catversion
Per feedback from Aleksander Alekseev
Backpatch to v14, as the former commit was
Modern POSIX seems to require strtod() to accept "-NaN", but there's
nothing about NaN in SUSv2, and some of our oldest buildfarm members
don't like it. Let's try writing it as -'NaN' instead; that seems
to produce the same result, at least on Intel hardware.
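That is, in the affected test queries, prefer
  SELECT -'NaN'::float8;   -- negates the NaN constant; portable
over
  SELECT '-NaN'::float8;   -- relies on strtod() accepting "-NaN"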
Per buildfarm.
We had a complaint that the postmaster fails to start if the invoking
program closes stdin. That happens because count_usable_fds expects
to be able to dup(0), and if it can't, we conclude there are no free
FDs and go belly-up. So far as I can find, though, there is no other
place in the server that touches stdin, and it's not unreasonable to
expect that a daemon wouldn't use that file.
As a simple improvement, let's dup FD 2 (stderr) instead. Unlike stdin,
it *is* reasonable for us to expect that stderr be open; even if we are
configured not to touch it, common libraries such as libc might try to
write error messages there.
Per gripe from Mario Emmenlauer. Given the lack of previous complaints,
I'm not excited about pushing this into stable branches, but it seems
OK to squeeze it into v14.
Discussion: https://postgr.es/m/48bafc63-c30f-3962-2ded-f2e985d93e86@emmenlauer.de