This change adapts these functions to the machine's address width
without depending on "long" being the right size. (It isn't on
Win64, for example.) While it seems unlikely anyone would care
to run with a stack depth limit exceeding 2GB, this is part of a
general push to avoid using type "long" to represent memory sizes.
It's convenient to use ssize_t rather than the perhaps-more-obvious
choice of size_t/Size, because the code involved depends on working
with a signed data type. Our MAX_KILOBYTES limit already ensures
that ssize_t will be sufficient to represent the maximum value of
max_stack_depth.
Extracted from a larger patch by Vladlen, plus additional hackery
by me.
Author: Vladlen Popolitov <v.popolitov@postgrespro.ru>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/1a01f0-66ec2d80-3b-68487680@27595217
This commit adds four fields to the statistics of relations, aggregating
the amount of time spent for each operation on a relation:
- total_vacuum_time, for manual vacuum.
- total_autovacuum_time, for vacuum done by the autovacuum daemon.
- total_analyze_time, for manual analyze.
- total_autoanalyze_time, for analyze done by the autovacuum daemon.
This gives users the option to derive the average time spent on
these operations with the help of the related "count" fields.
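For example, assuming these fields are exposed in milliseconds in
pg_stat_all_tables alongside the existing counters, the averages
could be derived like this:

SELECT relname,
       total_vacuum_time / NULLIF(vacuum_count, 0) AS avg_vacuum_ms,
       total_autovacuum_time / NULLIF(autovacuum_count, 0) AS avg_autovacuum_ms,
       total_analyze_time / NULLIF(analyze_count, 0) AS avg_analyze_ms,
       total_autoanalyze_time / NULLIF(autoanalyze_count, 0) AS avg_autoanalyze_ms
FROM pg_stat_all_tables;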
Bump catalog version (for the catalog changes) and PGSTAT_FILE_FORMAT_ID
(for the additions in PgStat_StatTabEntry).
Author: Sami Imseih
Reviewed-by: Bertrand Drouvot, Michael Paquier
Discussion: https://postgr.es/m/CAA5RZ0uVOGBYmPEeGF2d1B_67tgNjKx_bKDuL+oUftuoz+=Y1g@mail.gmail.com
The right mix of DDL and VACUUM could corrupt a catalog page header such
that PageIsVerified() durably fails, requiring a restore from backup.
This affects only catalogs that both have a syscache and have DDL code
that uses syscache tuples to construct updates. One of the test
permutations shows a variant not yet fixed.
This makes !TransactionIdIsValid(TM_FailureData.xmax) possible with
TM_Deleted. I think core and PGXN are indifferent to that.
Per bug #17821 from Alexander Lakhin. Back-patch to v13 (all supported
versions). The test case is v17+, since it uses INJECTION_POINT.
Discussion: https://postgr.es/m/17821-dd8c334263399284@postgresql.org
The main motivation for this change is to have a process that can serialize
stats after all other processes have terminated. Serializing stats already
happens in checkpointer, even though walsenders can be active longer.
The only reason the current shutdown sequence does not actively cause problems
is that walsender currently does not generate any stats. However, there is an
upcoming patch changing that.
Another need for this change originates in the AIO patchset, where IO
workers (which, in some edge cases, can emit stats of their own) need to run
while the shutdown checkpoint is being written.
This commit changes the shutdown sequence so that checkpointer is
signalled (via SIGINT) to trigger writing the shutdown checkpoint
without also causing checkpointer to exit. Once checkpointer has
written the shutdown checkpoint, it notifies postmaster via
PMSIGNAL_XLOG_IS_SHUTDOWN and waits for the termination signal
(SIGUSR2, as before). Checkpointer is now terminated after all
children, other than dead-end children and logger, have been
terminated, tracked using the new PM_WAIT_CHECKPOINTER PMState.
Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
Reviewed-by: Bertrand Drouvot <bertranddrouvot.pg@gmail.com>
Reviewed-by: Nazir Bilal Yavuz <byavuz81@gmail.com>
Discussion: https://postgr.es/m/kgng5nrvnlv335evmsuvpnh354rw7qyazl73kdysev2cr2v5zu@m3cfzxicm5kp
Useful for caseless matching. Similar to LOWER(), but avoids edge-case
problems with using LOWER() for caseless matching.
For collations that support it, CASEFOLD() handles characters with
more than two case variations or multi-character case variations. Some
characters may fold to uppercase. The results of case folding are also
more stable across Unicode versions than LOWER() or UPPER().
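A sketch of the intended use, assuming a collation that supports
case folding (such as one based on PG_UNICODE_FAST):

SELECT casefold('ẞ');  -- 'ss': capital sharp S folds to two lowercase letters
SELECT casefold('Straße') = casefold('STRASSE');  -- true, a caseless match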
Discussion: https://postgr.es/m/a1886ddfcd8f60cb3e905c93009b646b4cfb74c5.camel%40j-davis.com
Reviewed-by: Ian Lawrence Barwick
Formerly, these cases threw an error "cannot cast jsonb null to type
<whatever>". That seems less than helpful though. It's also
inconsistent with the behavior of the ->> operator, which translates
JSON null to SQL NULL, as do some other jsonb functions.
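For example:

SELECT ('{"a": null}'::jsonb -> 'a')::integer;  -- now NULL instead of an error
SELECT '{"a": null}'::jsonb ->> 'a';            -- NULL, as it always was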
Discussion: https://postgr.es/m/3851203.1722552717@sss.pgh.pa.us
Most were introduced in the 17 timeframe. The ones in wparser_def.c are
very old.
I also changed "JSON path expression for column \"%s\" should return
single item without wrapper" to "JSON path expression for column \"%s\"
must return single item when no wrapper is requested" to avoid
ambiguity.
Backpatch to 17.
Crickets: https://postgr.es/m/202501131819.26ors7oouafu@alvherre.pgsql
If an UPDATE of the referenced table changes the temporal start/end
times, shrinking the span for which the row is valid, we get a false
return from ri_Check_Pk_Match(), but overlapping references may still
be valid, if they didn't overlap with the removed span.
We need to consider what span(s) are still provided in the referenced
table. Instead of returning that from ri_Check_Pk_Match(), we can
just look it up in the main SQL query.
Reported-by: Sam Gabrielsson <sam@movsom.se>
Author: Paul Jungwirth <pj@illuminatedcomputing.com>
Discussion: https://www.postgresql.org/message-id/flat/CA+renyUApHgSZF9-nd-a0+OPGharLQLO=mDHcY4_qQ0+noCUVg@mail.gmail.com
9aea73fc61 has added support for backend statistics, relying on
PgStat_EntryRef->pending for its data pending flush. This design
lacks flexibility, because the pending list requires memory
allocation, making it unsuitable for incrementing counters in
critical sections.
Pending data of backend statistics is reworked so that the
implementation does not depend on PgStat_EntryRef->pending anymore,
relying instead on a static area of memory to store the counters that
are flushed when stats are reported to the pgstats dshash. An
advantage of this approach is that the pending data can be
manipulated in critical sections; some patches under discussion
require that.
The pending data is tracked by PendingBackendStats, local to
pgstat_backend.c. Two routines are introduced to allow IO statistics to
update the backend-side counters. have_static_pending_cb and
flush_static_cb are used for the flush, instead of flush_pending_cb.
Author: Bertrand Drouvot, Michael Paquier
Discussion: https://postgr.es/m/66efowskppsns35v5u2m7k4sdnl7yoz5bo64tdjwq7r5lhplrz@y7dme5xwh2r5
The two callbacks have_fixed_pending_cb and flush_fixed_cb have been
introduced in fc415edf8c to provide a way for fixed-numbered
statistics to control the flush of their data. These are renamed to
have_static_pending_cb and flush_static_cb, respectively. The
restriction that these only apply to fixed-numbered stats is removed.
A follow-up patch will make use of them for backend statistics. This
stats kind is variable-numbered, and patches are under discussion to
track WAL data for IO and backend stats; these cannot use
PgStat_EntryRef->pending, as the pending data would be touched in
critical sections, where no memory allocation can happen.
Per discussion with Andres Freund.
Author: Bertrand Drouvot
Reviewed-by: Michael Paquier
Discussion: https://postgr.es/m/66efowskppsns35v5u2m7k4sdnl7yoz5bo64tdjwq7r5lhplrz@y7dme5xwh2r5
The PG_UNICODE_FAST locale uses code point sort order (fast,
memcmp-based) combined with Unicode character semantics. The character
semantics are based on Unicode full case mapping.
Full case mapping can map a single codepoint to multiple codepoints,
such as "ß" uppercasing to "SS". Additionally, it handles
context-sensitive mappings like the "final sigma", and it uses
titlecase mappings such as "Dž" when titlecasing (rather than plain
uppercase mappings).
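A sketch of the behavior, assuming a UTF-8 database and the builtin
collation provider:

CREATE COLLATION unicode_fast (provider = builtin, locale = 'PG_UNICODE_FAST');
SELECT upper('ß' COLLATE unicode_fast);     -- 'SS', per full case mapping
SELECT lower('ΟΔΟΣ' COLLATE unicode_fast);  -- 'οδος', ending in final sigma 'ς'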
Importantly, the uppercasing of "ß" as "SS" is specifically mentioned
by the SQL standard. In Postgres, UCS_BASIC uses plain ASCII semantics
for case mapping and pattern matching, so if we changed it to use the
PG_UNICODE_FAST locale, it would offer better compliance with the
standard. For now, though, do not change the behavior of UCS_BASIC.
Discussion: https://postgr.es/m/ddfd67928818f138f51635712529bc5e1d25e4e7.camel@j-davis.com
Discussion: https://postgr.es/m/27bb0e52-801d-4f73-a0a4-02cfdd4a9ada@eisentraut.org
Reviewed-by: Peter Eisentraut, Daniel Verite
Previously, hex_encode looked up each nibble of the input
separately. We now use a larger lookup table containing the two-byte
encoding of every possible input byte, resulting in a 1/3 reduction
in encoding time.
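hex_encode() underlies, for example, the hex form of encode(), so a
call like the following produces the same result as before, just
faster:

SELECT encode('\xdeadbeef'::bytea, 'hex');  -- 'deadbeef'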
Reviewed by Tom Lane, Michael Paquier, Nathan Bossart, David Rowley
Discussion: https://postgr.es/m/CANWCAZZvXuJMgqMN4u068Yqa19CEjS31tQKZp_qFFFbgYfaXqQ%40mail.gmail.com
Remove the flex version checks from configure and meson. The cutoff
versions are all so ancient that this is no longer relevant, and what
the actual cutoff should be is a bit fuzzy.
This also removes the ancient behavior whereby configure would
accept a "lex" program if it is actually flex. This aligns the check
with meson in this respect.
For future reference, as of this commit, these are relevant flex
versions:
- The hard required minimum is flex 2.5.34 as of commit b1ef48980d,
but this has not actually been tested.
- Prior to this, the minimum enforced by configure/meson was flex
2.5.35, which is the oldest present in the buildfarm right now.
- As of commit 6fdd5d9563, the oldest version that will compile
without warnings due to flex-generated code is flex 2.5.36.
- The oldest version that probably still has some practical relevance
is flex 2.5.37, which ships with CentOS/RHEL 7.
Discussion: https://www.postgresql.org/message-id/1a204ccd-7ae6-478c-a431-407b5c48ccc6@eisentraut.org
As written, it was triggering a compilation warning for old versions of
clang, as reported by buildfarm members ayu, batfish and demoiselle.
Forcing a cast with "unsigned int" should fix the warning.
While at it, the macro is moved to pgstat.h, closer to the declaration
of IOOp, per suggestion from Tom Lane.
Reported-by: Tom Lane
Reviewed-by: Bertrand Drouvot, Tom Lane, Nazir Bilal Yavuz
Discussion: https://postgr.es/m/1272824.1736961543@sss.pgh.pa.us
If a time zone abbreviation used in datetime input is defined in
the currently active timezone, use that definition in preference
to looking in the timezone_abbreviations list. That allows us to
correctly handle abbreviations that have different meanings in
different timezones. Also, it eliminates an inconsistency between
datetime input and datetime output: the non-ISO datestyles for
timestamptz have always printed abbreviations taken from the IANA
data, not from timezone_abbreviations. Before this fix, it was
possible to demonstrate cases where casting a timestamp to text
and back fails or changes the value significantly because of that
inconsistency.
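For illustration: the IANA data for Asia/Shanghai uses the
abbreviation CST, which the default timezone_abbreviations list
instead defines as US Central Standard Time:

SET TIME ZONE 'Asia/Shanghai';
SELECT '2025-06-01 12:00 CST'::timestamptz;
-- CST is now read as +08, matching how it would be printed, not as -06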
While this change removes the ability to override the IANA data about
an abbreviation known in the current zone, it's not clear that there's
any real use-case for doing so. But it is clear that this makes life
a lot easier for dealing with abbreviations that have conflicts across
different time zones.
Also update the pg_timezone_abbrevs view to report abbreviations
that are recognized via the IANA data, and *not* report any
timezone_abbreviations entries that are thereby overridden.
Under the hood, there are now two SRFs, one that pulls the IANA
data and one that pulls timezone_abbreviations entries. They're
combined by logic in the view. This approach was useful for
debugging (since the functions can be called on their own).
While I don't intend to document the functions explicitly,
they might be useful to call directly.
Also improve DecodeTimezoneAbbrev's caching logic so that it can
cache zone abbreviations found in the IANA data. Without that,
this patch would have caused a noticeable degradation of the
runtime of timestamptz_in.
Per report from Aleksander Alekseev and additional investigation.
Discussion: https://postgr.es/m/CAJ7c6TOATjJqvhnYsui0=CO5XFMF4dvTGH+skzB--jNhqSQu5g@mail.gmail.com
This allows the RETURNING list of INSERT/UPDATE/DELETE/MERGE queries
to explicitly return old and new values by using the special aliases
"old" and "new", which are automatically added to the query (if not
already defined) while parsing its RETURNING list, allowing things
like:
RETURNING old.colname, new.colname, ...
RETURNING old.*, new.*
Additionally, a new syntax is supported, allowing the names "old" and
"new" to be changed to user-supplied alias names, e.g.:
RETURNING WITH (OLD AS o, NEW AS n) o.colname, n.colname, ...
This is useful when the names "old" and "new" are already defined,
such as inside trigger functions, allowing backwards compatibility to
be maintained -- the interpretation of any existing queries that
happen to already refer to relations called "old" or "new", or use
those as aliases for other relations, is not changed.
For an INSERT, old values will generally be NULL, and for a DELETE,
new values will generally be NULL, but that may change for an INSERT
with an ON CONFLICT ... DO UPDATE clause, or if a query rewrite rule
changes the command type. Therefore, we put no restrictions on the use
of old and new in any DML queries.
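A complete example (table and column names are illustrative):

UPDATE products SET price = price * 1.10 WHERE id = 42
  RETURNING old.price AS old_price, new.price AS new_price;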
Dean Rasheed, reviewed by Jian He and Jeff Davis.
Discussion: https://postgr.es/m/CAEZATCWx0J0-v=Qjc6gXzR=KtsdvAE7Ow=D=mu50AgOe+pvisQ@mail.gmail.com
The "does not exist" error in object_aclmask_ext() was written as
ereport(), suggesting that it is user-facing. This is problematic:
get_object_class_descr() is meant to be for internal errors only and
does not support translation.
For the has_xxx_privilege functions, the error has not been
user-facing since commit 403ac226dd. The remaining users are
pg_database_size() and pg_tablespace_size(). The call stack here is
pretty deep and this dependency is not obvious. Here we can put in an
explicit existence check with a bespoke error message early in the
function.
Then we can downgrade the error in object_aclmask_ext() to a normal
"cache lookup failed" internal error.
Reviewed-by: Alvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://www.postgresql.org/message-id/flat/da2f8942-be6d-48d0-ac1c-a053370a6b1f@eisentraut.org
RowCompareType served as a way to describe the fundamental meaning of
an operator, notionally independent of an operator class (although so
far this was only really supported for btrees). Its original purpose
was for use inside RowCompareExpr, and it has also found some small
use outside, such as for get_op_btree_interpretation().
We want to expand this now, as a more general way to describe operator
semantics for other index access methods, including gist (to improve
GistTranslateStratnum()) and others not written yet. To avoid future
confusion, we rename the type to CompareType and the symbols from
ROWCOMPARE_XXX to COMPARE_XXX to reflect their more general purpose.
Reviewed-by: Mark Dilger <mark.dilger@enterprisedb.com>
Discussion: https://www.postgresql.org/message-id/flat/E72EAA49-354D-4C2E-8EB9-255197F55330@enterprisedb.com
ca9c6a5680 consolidated most of the vacuum-related GUCs' documentation
into a new subsection. af2317652d then enforced this order in
postgresql.conf.sample. This commit reorganizes the GUC groups in
guc_tables.c/h to match the updated ordering in the docs.
Reported-by: Álvaro Herrera
Reviewed-by: Álvaro Herrera, Alena Rybakina
Discussion: https://postgr.es/m/202501132046.m4mcvxxswznu%40alvherre.pgsql
If a new catalog tuple is inserted that belongs to a catcache list
entry, and cache invalidation happens while the list entry is being
built, the list entry might miss the newly inserted tuple.
To fix, change the way we detect concurrent invalidations while a
catcache entry is being built. Keep a stack of entries that are being
built, and apply cache invalidation to those entries in addition to
the real catcache entries. This is similar to the in-progress list in
relcache.c.
Back-patch to all supported versions.
Reviewed-by: Noah Misch
Discussion: https://www.postgresql.org/message-id/2234dc98-06fe-42ed-b5db-ac17384dc880@iki.fi
Currently, IOs in the pg_stat_io view are counted as blocks of size
BLCKSZ. This design has two limitations:
* The actual number of I/O requests sent to the kernel is lower,
because I/O requests may be merged before being sent. Additionally,
counting in blocks gives the impression that all I/Os are done in
block-sized units, which obscures the benefits of merging I/O
requests.
* Some patches are in the works to extend pg_stat_io to track
operations that may not be linked to the block size. For example, WAL
reads are done in variable numbers of bytes and cannot be shown
correctly in the pg_stat_io view as it stands, and we want to keep
all this data in a single system view, rather than spread the data
across multiple relations, to ease monitoring.
WaitReadBuffers() can now be tracked as a single read operation
worth N blocks. Same for ExtendBufferedRelShared() and
ExtendBufferedRelLocal() for extensions.
Three columns are added to pg_stat_io to track the number of bytes
for reads, writes and extensions. op_bytes, which was always
hardcoded to BLCKSZ, is removed. Backend IO statistics are updated
to reflect these changes.
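For example, assuming the new columns are named read_bytes,
write_bytes and extend_bytes, the average size of each operation can
be derived like this:

SELECT backend_type, object, context,
       read_bytes / NULLIF(reads, 0) AS avg_read_bytes,
       write_bytes / NULLIF(writes, 0) AS avg_write_bytes,
       extend_bytes / NULLIF(extends, 0) AS avg_extend_bytes
FROM pg_stat_io;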
Bump catalog version.
Author: Nazir Bilal Yavuz
Reviewed-by: Bertrand Drouvot, Melanie Plageman
Discussion: https://postgr.es/m/CAN55FZ0oqxBaaHAEsj=xFqkzE3n5P=3RA1V_igXwL-RV7QRzyw@mail.gmail.com
ca9c6a5680 consolidated most of the vacuum-related GUCs' documentation
into a new subsection. It neglected, however, to reorganize
postgresql.conf.sample to match the new order. Do this now.
Reported-by: Álvaro Herrera
Discussion: https://postgr.es/m/202501110902.5banlseavz7c%40alvherre.pgsql
Commit 27a1f8d108 missed updating the max HBA option count to
account for the newly added option. Fix by bumping the counter and
adjusting the relevant comment to match. Backpatch to all supported
branches, like the erroneous commit.
Reported-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/286764.1736697356@sss.pgh.pa.us
Backpatch-through: v13
When deparsing a JsonExpr, variable names in the PASSING clause were
not quoted. However, since they are parsed as ColLabel tokens, some
variable names require double quotes to ensure that they are properly
interpreted. Fix by using quote_identifier() in the deparsing code.
This oversight was limited to the SQL/JSON query functions
JSON_EXISTS(), JSON_QUERY(), and JSON_VALUE().
Back-patch to v17, where these functions were added.
Dean Rasheed, reviewed by Tom Lane.
Discussion: https://postgr.es/m/CAEZATCXTpAS%3DncfLNTZ7YS6O5puHeLg_SUYAit%2Bcs7wsrd9Msg%40mail.gmail.com
When deparsing an XMLTABLE() expression, XML namespace names were not
quoted. However, since they are parsed as ColLabel tokens, some names
require double quotes to ensure that they are properly interpreted.
Fix by using quote_identifier() in the deparsing code.
Back-patch to all supported versions.
Dean Rasheed, reviewed by Tom Lane.
Discussion: https://postgr.es/m/CAEZATCXTpAS%3DncfLNTZ7YS6O5puHeLg_SUYAit%2Bcs7wsrd9Msg%40mail.gmail.com
The pgstat_count_io_op() function, which counts a single I/O operation,
wraps pgstat_count_io_op_n() with a counter value of 1. The latter is
declared in pgstat.h and used nowhere in the code, so let's remove it in
favor of the former.
This change also makes the code more symmetric with
pgstat_count_io_op_time(), which already uses a similar set of
arguments, except that it also counts the I/O time. This will
somewhat ease the integration of a follow-up patch that adds
byte-level tracking in pg_stat_io for some of its attributes, lifting
the current BLCKSZ-based restriction whereby all I/O operations are
assumed to be block-based.
Author: Nazir Bilal Yavuz
Reviewed-by: Bertrand Drouvot
Discussion: https://postgr.es/m/CAN55FZ32ze812=yjyZg1QeXhKvACUM_Nu0_gyPQcUKKuVHL5xA@mail.gmail.com
This commit changes the way pending backend statistics are tracked by
moving them into a new structure called PgStat_BackendPending, removing
PgStat_BackendPendingIO. PgStat_BackendPending currently only includes
PgStat_PendingIO for the pending I/O stats.
pgstat_flush_backend() is extended with a "flags" argument to control
which parts of the stats of a backend should be flushed.
With this refactoring, it becomes easier to plug more data into
backend statistics. A patch to add WAL-related information to this
stats kind is under discussion.
Author: Bertrand Drouvot
Discussion: https://postgr.es/m/Z3zqc4o09dM/Ezyz@ip-10-97-1-34.eu-west-3.compute.internal
This commit introduces a new parameter named
autovacuum_worker_slots that controls how many autovacuum worker
slots to reserve during server startup. Modifying this new
parameter's value does require a server restart, but it should
typically be set to the upper bound of what you might realistically
need autovacuum_max_workers to be. With that new parameter in
place, autovacuum_max_workers can now be changed with a SIGHUP
(e.g., pg_ctl reload).
If autovacuum_max_workers is set higher than
autovacuum_worker_slots, a WARNING is emitted, and the server will
only start up to autovacuum_worker_slots workers at a given time.
If autovacuum_max_workers is set to a value less than the number of
currently-running autovacuum workers, the existing workers will
continue running, but no new workers will be started until the
number of running autovacuum workers drops below
autovacuum_max_workers.
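A sketch of the intended usage:

-- postgresql.conf, set once, takes effect at server restart:
--   autovacuum_worker_slots = 16
-- The active limit can then be changed without a restart:
ALTER SYSTEM SET autovacuum_max_workers = 8;
SELECT pg_reload_conf();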
Reviewed-by: Sami Imseih, Justin Pryzby, Robert Haas, Andres Freund, Yogesh Sharma
Discussion: https://postgr.es/m/20240410212344.GA1824549%40nathanxps13
Slightly faulty logic in the original jsonb code (commit d9134d0a35)
results in an empty top-level array sorting less than a JSON null. We
can't change the sort order now since it would affect btree indexes over
jsonb, so document the anomaly.
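The anomaly, for reference (a JSON null notionally sorts lowest in
the jsonb btree ordering):

SELECT '[]'::jsonb < 'null'::jsonb;  -- true, due to the faulty logic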
Backpatch to all live branches (13 .. 17)
In master, also add a code comment noting the anomaly.
Reported-by: Yan Chengpen
Reviewed-by: Jian He
Discussion: https://postgr.es/m/OSBPR01MB45199DD8DA2D1CECD50518188E272@OSBPR01MB4519.jpnprd01.prod.outlook.com
When looking up statistical data about an expression, we do not need
to concern ourselves with the outer joins that could null the
Vars/PHVs contained in the expression. Accounting for nullingrels in
the expression could cause estimate_num_groups to count the same Var
multiple times if it's marked with different nullingrels. This is
incorrect, and could lead to "ERROR: corrupt MVNDistinct entry" when
searching for multivariate n-distinct.
Furthermore, the nullingrels could prevent us from matching an
expression to expressional index columns or to the expressions in
extended statistics, leading to inaccurate estimates.
To fix, strip out all the nullingrels from the expression before we
look up statistical data about it. There is one ensuing plan change
in the regression tests, but it looks reasonable and does not
compromise the test's original purpose.
This patch could result in plan changes, but it fixes an actual bug,
so back-patch to v16 where the outer-join-aware-Var infrastructure was
introduced.
Author: Richard Guo
Discussion: https://postgr.es/m/CAMbWs4-2Z4k+nFTiZe0Qbu5n8juUWenDAtMzi98bAZQtwHx0-w@mail.gmail.com
Cause parallel workers to not check datallowconn, rolcanlogin, and
ACL_CONNECT privileges. The leader already checked these things
(except for rolcanlogin which might have been checked for a different
role). Re-checking can accomplish little except to induce unexpected
failures in applications that might not even be aware that their query
has been parallelized. We already had the principle that parallel
workers rely on their leader to pass a valid set of authorization
information, so this change just extends that a bit further.
Also, modify the ReservedConnections, datconnlimit and rolconnlimit
logic so that these limits are only enforced against regular backends,
and only regular backends are counted while checking if the limits
were already reached. Previously, background processes that had an
assigned database or role were subject to these limits (with rather
random exclusions for autovac workers and walsenders), and the set of
existing processes that counted against each limit was quite haphazard
as well. The point of these limits, AFAICS, is to ensure the
availability of PGPROC slots for regular backends. Since all other
types of processes have their own separate pools of PGPROC slots, it
makes no sense either to enforce these limits against them or to count
them while enforcing the limit.
While edge-case failures of these sorts have been possible for a
long time, the problem got a good deal worse with commit 5a2fed911
(CVE-2024-10978), which caused parallel workers to make some of these
checks using the leader's current role where before we had used its
AuthenticatedUserId, thus allowing parallel queries to fail after
SET ROLE. The previous behavior was fairly accidental and I have
no desire to return to it.
This patch includes reverting 73c9f91a1, which was an emergency hack
to suppress these same checks in some cases. It wasn't complete,
as shown by a recent bug report from Laurenz Albe. We can also revert
fd4d93d26 and 492217301, which hacked around the same problems in one
regression test.
In passing, remove the special case for autovac workers in
CheckMyDatabase; it seems cleaner to have AutoVacWorkerMain pass
the INIT_PG_OVERRIDE_ALLOW_CONNS flag, now that that does what's
needed.
Like 5a2fed911, back-patch to supported branches (which sadly no
longer includes v12).
Discussion: https://postgr.es/m/1808397.1735156190@sss.pgh.pa.us
The need for this was missed in commit 93db6cbda, with the result
being that if we launch a slotsync worker it would consume one of
the PGPROCs in the max_connections pool. That could lead to inability
to launch the worker, or to subsequent failures of connection requests
that should have succeeded according to the configured settings.
Rather than create some one-off infrastructure to support this,
let's group the slotsync worker with the existing autovac launcher
in a new category of "special worker" processes. These are kind of
like auxiliary processes, but they cannot use that infrastructure
because they need to be able to run transactions.
For the moment, make these processes share the PGPROC freelist
used for autovac workers (which previously supplied the autovac
launcher too). This is partly to avoid an ABI change in v17,
and partly because it seems silly to have a freelist with
at most two members. This might be worth revisiting if we grow
enough workers in this category.
Tom Lane and Hou Zhijie. Back-patch to v17.
Discussion: https://postgr.es/m/1808397.1735156190@sss.pgh.pa.us
Previously, if an infinite value was passed to date_trunc(), then the
same infinite value would always be returned regardless of the field
unit given by the caller. This commit updates the function so that an
error is returned when an invalid unit is passed to date_trunc() with an
infinite value.
This matches the behavior of date_trunc() with a finite value and
date_part() with an infinite value, making the handling of interval,
timestamp and timestamptz more consistent across the board for these two
functions.
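For example:

SELECT date_trunc('day', 'infinity'::timestamptz);   -- still returns infinity
SELECT date_trunc('oops', 'infinity'::timestamptz);  -- now an error, as for finite inputs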
Some tests are added to cover all these new failure cases, with an
unsupported unit and infinite values for the three data types. Up to
now, there were no test cases in core that checked all these
patterns.
Author: Joseph Koshakow
Discussion: https://postgr.es/m/CAAvxfHc4084dGzEJR0_pBZkDuqbPGc5wn7gK_M0XR_kRiCdUJQ@mail.gmail.com
Trying to clean up the code a bit while we're working on these files
for the reentrant scanner/pure parser patches. This cleanup only
touches the code sections after the second '%%' in each file, via a
manually supervised and locally hacked-up pgindent.
manually-supervised and locally hacked up pgindent.