4ba012a8ed defined the "header" (pointer to the stats data) of
from_serialized_data() as const, even though it is fine (and
expected!) for the callback to modify the shared memory entry when
loading the stats at startup.
While on it, this commit updates the callback to_serialized_data() in
the test module test_custom_stats to make the data extracted from the
"header" parameter a const since it should never be modified: the stats
are written to disk and no modifications are expected in the shared
memory entry.
This clarifies the API contract of these new callbacks.
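For reference, a minimal sketch of the resulting const-ness (the struct
name, callback names, and signatures here are simplified for
illustration, not the exact definitions):

    typedef struct CustomEntry
    {
        PgStatShared_Common header;
        int64       counter;
    } CustomEntry;

    /* Startup: modifying the shared memory entry is fine (and expected). */
    static bool
    custom_from_serialized_data(PgStatShared_Common *header, FILE *fp)
    {
        CustomEntry *entry = (CustomEntry *) header;

        return fread(&entry->counter, sizeof(entry->counter), 1, fp) == 1;
    }

    /* Shutdown: the entry is only read, hence the const qualifier. */
    static bool
    custom_to_serialized_data(const PgStatShared_Common *header, FILE *fp)
    {
        const CustomEntry *entry = (const CustomEntry *) header;

        return fwrite(&entry->counter, sizeof(entry->counter), 1, fp) == 1;
    }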
Reported-By: Peter Eisentraut <peter@eisentraut.org>
Author: Michael Paquier <michael@paquier.xyz>
Co-authored-by: Sami Imseih <samimseih@gmail.com>
Discussion: https://postgr.es/m/d87a93b0-19c7-4db6-b9c0-d6827e7b2da1@eisentraut.org
Commit e0f373ee4 fixed up races in Cluster::connect_fails when using
log_like. t/002_client.pl didn't get the memo, though, because it
doesn't use Test::Cluster to perform its custom hook tests. (This is
probably not an issue at the moment, since the log check is only done
after authentication success and not failure, but there's no reason to
wait for someone to hit it.)
Introduce the fix, based on debug2 logging, to its use of log_check() as
well, and move the logic into the test() helper so that future
additions don't need to keep duplicating it.
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Discussion: https://postgr.es/m/CAOYmi%2BmrGg%2Bn_X2MOLgeWcj3v_M00gR8uz_D7mM8z%3DdX1JYVbg%40mail.gmail.com
Backpatch-through: 18
The definition of PGoauthBearerRequest uses a temporary SOCKTYPE macro
to hide the difference between Windows and Berkeley socket handles,
since we don't surface pgsocket in our public API. This macro doesn't
need to escape the header, because implementers will choose the correct
socket type based on their platform, so I #undef'd it immediately after
use.
I didn't namespace that helper, though, so if anyone else needs a
SOCKTYPE macro, libpq-fe.h will now unhelpfully get rid of it. This
doesn't seem too far-fetched, given its proximity to existing POSIX
macro names.
Add a PQ_ prefix to avoid collisions, update and improve the surrounding
documentation, and backpatch.
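For reference, the pattern now looks roughly like this (struct members
and the callback signature are abbreviated for illustration):

    /* libpq-fe.h: choose the platform's socket handle type... */
    #ifdef _WIN32
    #define PQ_SOCKTYPE SOCKET
    #else
    #define PQ_SOCKTYPE int
    #endif

    typedef struct PGoauthBearerRequest
    {
        /* ...other fields elided... */
        int         (*async) (struct PGoauthBearerRequest *request,
                              PQ_SOCKTYPE *altsock);
    } PGoauthBearerRequest;

    #undef PQ_SOCKTYPE          /* ...and keep it from escaping the header */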
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Discussion: https://postgr.es/m/CAOYmi%2BmrGg%2Bn_X2MOLgeWcj3v_M00gR8uz_D7mM8z%3DdX1JYVbg%40mail.gmail.com
Backpatch-through: 18
I originally used just "regress-VERSION.mo", but that seems too
generic considering that some packagers will put this file into
a system-wide directory. Per suggestion from Christoph Berg.
Reported-by: Christoph Berg <myon@debian.org>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/aULSW7Xqx5MqDW_1@msg.df7cb.de
The test is very sensitive to how backends start and exit, because it
tests dead-end backends which occur when all the connection slots are
in use. The test failed occasionally in the CI, when the backend that
was launched for the raw_connect_works() check lingered for a while,
and exited only later during the test. When it exited, it released a
connection slot while the test expected all the slots to be in use at
that time.
The 002_connection_limits.pl test had a similar issue: if the backend
launched for safe_psql() in the test initialization lingers around, it
uses up a connection slot during the test, messing up the test's
connection counting. I haven't seen that in the CI, but when I added a
"sleep(1);" to proc_exit(), the test failed.
To make the tests more robust, restart the server to ensure that any
lingering backends don't interfere with the later test steps.
In passing, fix a bogus test name.
Report and analysis by Jelte Fennema-Nio, Andres Freund, Thomas Munro.
Discussion: https://www.postgresql.org/message-id/CAGECzQSU2iGuocuP+fmu89hmBmR3tb-TNyYKjCcL2M_zTCkAFw@mail.gmail.com
Backpatch-through: 18
Allow pg_createsubscriber to reuse existing publications instead of
failing when they already exist on the publisher.
Previously, pg_createsubscriber would fail if any specified publication
already existed. Now, existing publications are reused as-is with their
current configuration, and non-existing publications are created
automatically with FOR ALL TABLES.
This change provides flexibility when working with mixed scenarios of
existing and new publications. Users should verify that existing
publications have the desired configuration before reusing them, and can
use --dry-run with verbose mode to see which publications will be reused
and which will be created.
Only publications created by pg_createsubscriber are cleaned up during
error cleanup operations. Pre-existing publications are preserved unless
'--clean=publications' is explicitly specified, which drops all
publications.
This feature would be helpful for pub-sub configurations where users want
to subscribe to a subset of tables from the publisher.
Author: Shubham Khanna <khannashubham1197@gmail.com>
Reviewed-by: Euler Taveira <euler@eulerto.com>
Reviewed-by: Peter Smith <smithpb2250@gmail.com>
Reviewed-by: Zhijie Hou (Fujitsu) <houzj.fnst@fujitsu.com>
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Reviewed-by: vignesh C <vignesh21@gmail.com>
Reviewed-by: tianbing <tian_bing_0531@163.com>
Discussion: https://postgr.es/m/CAHv8Rj%2BsxWutv10WiDEAPZnygaCbuY2RqiLMj2aRMH-H3iZwyA%40mail.gmail.com
This change makes pgstat_report_vacuum() more consistent with
pgstat_report_analyze(), which also takes a Relation. This enforces a
policy that callers of this routine should open and lock the relation
whose statistics are updated before calling it. We are unlikely to
have many callers of this routine in the tree, but it seems like a
good idea to make this requirement explicit in the long run.
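A rough sketch of the expected calling pattern (argument list
abbreviated; the lock level is just whatever the caller holds anyway):

    Relation    rel = table_open(relid, ShareUpdateExclusiveLock);

    /* ... do the vacuum work ... */
    pgstat_report_vacuum(rel, livetuples, deadtuples, starttime);

    table_close(rel, ShareUpdateExclusiveLock);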
Author: Bertrand Drouvot <bertranddrouvot.pg@gmail.com>
Suggested-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Discussion: https://postgr.es/m/aUEA6UZZkDCQFgSA@ip-10-97-1-34.eu-west-3.compute.internal
ICU still depends on libc for compatibility with certain historical
behavior for single-byte encodings. Make the dependency explicit by
holding a locale_t object when required.
We should consider a better solution in the future, such as decoding
the text to UTF-32 and using u_tolower(). That would be a behavior
change and would require additional infrastructure, though, so for now
just avoid the global LC_CTYPE dependency.
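The explicit dependency boils down to something like this, assuming
POSIX newlocale()/tolower_l() (the locale name shown is only an
example):

    locale_t    loc = newlocale(LC_CTYPE_MASK, "en_US.ISO8859-1",
                                (locale_t) 0);

    if (loc != (locale_t) 0)
    {
        unsigned char c = tolower_l((unsigned char) 'A', loc);

        /* ... use c ... */
        freelocale(loc);
    }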
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Discussion: https://postgr.es/m/450ceb6260cad30d7afdf155d991a9caafee7c0d.camel@j-davis.com
Previously, libc's tolower() was always used for lowercasing
identifiers, regardless of the database locale (though only characters
beyond 127 in single-byte encodings were affected). Refactor to allow
each provider to supply its own implementation of identifier
downcasing.
For historical compatibility, when using a single-byte encoding, ICU
still relies on tolower().
One minor behavior change is that, before the database default locale
is initialized, identifiers are downcased using ASCII
semantics. Previously, downcasing used the postmaster's LC_CTYPE
setting from the environment. While that could have some effect during
GUC processing, for example, it would have been fragile to rely on the
environment setting anyway. (Also, it only matters when the encoding
is single-byte.)
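The ASCII semantics amount to something like this (a sketch, not the
exact code):

    static char
    downcase_ascii(char ch)
    {
        /* only touch A-Z; bytes outside ASCII are left alone */
        if (ch >= 'A' && ch <= 'Z')
            ch += 'a' - 'A';
        return ch;
    }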
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Discussion: https://postgr.es/m/450ceb6260cad30d7afdf155d991a9caafee7c0d.camel@j-davis.com
We already do this in CreateParallelContext, InitializeParallelDSM, and
LaunchParallelWorkers. I suspect the reason why the matching logic was
omitted from ReinitializeParallelDSM is that I failed to realize that
any memory allocation was happening here -- but shm_mq_attach does
allocate, which could result in a shm_mq_handle being allocated in a
shorter-lived context than the ParallelContext which points to it.
That could result in a crash if the shorter-lived context is freed
before the parallel context is destroyed. As far as I am currently
aware, there is no way to reach a crash using only code that is
present in core PostgreSQL, but extensions could potentially trip
over this. Fixing this in the back-branches appears low-risk, so
back-patch to all supported versions.
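The fix follows the same pattern as the other three functions, roughly
(a simplified sketch):

    void
    ReinitializeParallelDSM(ParallelContext *pcxt)
    {
        MemoryContext oldcontext;

        /*
         * Allocations made here (such as the shm_mq_handle coming from
         * shm_mq_attach) must live as long as the ParallelContext.
         */
        oldcontext = MemoryContextSwitchTo(TopTransactionContext);

        /* ... reinitialize the DSM contents, reattach the queues ... */

        MemoryContextSwitchTo(oldcontext);
    }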
Author: Jakub Wartak <jakub.wartak@enterprisedb.com>
Co-authored-by: Jeevan Chalke <jeevan.chalke@enterprisedb.com>
Backpatch-through: 14
Discussion: http://postgr.es/m/CAKZiRmwfVripa3FGo06=5D1EddpsLu9JY2iJOTgbsxUQ339ogQ@mail.gmail.com
Further research shows that the reason commit 7db6809ce failed
is that recent glibc versions short-circuit translation attempts
when LC_MESSAGES is 'C.<encoding>', not only when it's 'C'.
There seems no way around that, so we'll have to live with only
testing NLS when a suitable real locale is installed.
However, something can still be salvaged: it still seems like a
good idea to verify that the PRI* macros work as expected even when
we can't check their translations (see f8715ec86 for motivation).
Hence, adjust the test to always run the ereport calls, and tweak
the parameter values in hopes of detecting any cases where there's
confusion about the actual widths of the parameters.
Discussion: https://postgr.es/m/1991599.1765818338@sss.pgh.pa.us
In the wake of commit b45242fd3, bytea_sortsupport() still called out
to varstr_sortsupport(). Treating bytea as a kind of text/varchar
required varstr_sortsupport() to allow for the possibility of
NUL bytes, but only for C collation. This was confusing. For
better separation of concerns, create an independent sortsupport
implementation in bytea.c.
The heuristics for bytea_abbrev_abort() remain the same as for
varstr_abbrev_abort(). It's possible that the bytea case warrants
different treatment, but that is left for future investigation.
In passing, adjust some strange-looking comparisons in
varstr_abbrev_abort().
Author: Aleksander Alekseev <aleksander@tigerdata.com>
Reviewed-by: John Naylor <johncnaylorls@gmail.com>
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Discussion: https://postgr.es/m/CAJ7c6TP1bAbEhUJa6+rgceN6QJWMSsxhg1=mqfSN=Nb-n6DAKg@mail.gmail.com
This commit provides test coverage for dc7c77f825, where the redo
record and the checkpoint record end on different WAL segments, with
the start of recovery able to detect that the redo record is missing.
This test uses a wait injection point placed in the critical section of
a checkpoint, a method that requires not one but two wait injection
points to avoid any memory allocations within the critical section of
the checkpoint:
- A checkpoint is run with a background psql.
- A first wait point is run by the checkpointer before the critical
section, allocating the shared memory required by the DSM registry for
the wait machinery in the library injection_points.
- The first point is woken up.
- A second wait point is loaded before the critical section, allocating
the memory to build the path to the loaded library, then run in the
critical section once the checkpoint redo record has been logged.
- The WAL segment is switched while waiting on the second point.
- The checkpoint completes.
- The cluster is stopped with immediate mode.
- The segment that includes the redo record is removed.
- On startup, recovery fails as the redo record cannot be found.
The error message introduced in dc7c77f825 is now reduced to a FATAL,
meaning that the information is still provided while making it possible
to write a test for it. Nitin has provided a basic version of the
test, which I have enhanced to make portable using the two wait
points. Without dc7c77f825, the cluster crashes in this test, not with
a PANIC but due to the pointer dereference at the beginning of
recovery, the failure mentioned in the other commit.
Author: Nitin Jadhav <nitinjadhavpostgres@gmail.com>
Co-authored-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/CAMm1aWaaJi2w49c0RiaDBfhdCL6ztbr9m=daGqiOuVdizYWYaA@mail.gmail.com
This commit adds an extra check at the beginning of recovery to ensure
that the redo record of a checkpoint exists before attempting WAL
replay, logging a PANIC if the redo record referenced by the checkpoint
record could not be found. This is the same level of failure as when a
checkpoint record is missing. This check is added when a cluster is
started without a backup_label, after retrieving its checkpoint record.
The redo LSN used for the check is retrieved from the checkpoint record
successfully read.
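A sketch of the shape of the check (simplified; the real code goes
through the usual WAL reader machinery and reports both LSNs in its
error message):

    if (checkPoint.redo < CheckPointLoc)
    {
        /* Ensure the redo record is readable before replay begins. */
        XLogPrefetcherBeginRead(xlogprefetcher, checkPoint.redo);
        if (ReadRecord(xlogprefetcher, LOG, false,
                       checkPoint.ThisTimeLineID) == NULL)
            ereport(PANIC,
                    (errmsg("could not find redo location referenced by checkpoint record")));
    }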
In the case where a backup_label exists, the startup process already
fails if the redo record cannot be found after reading a checkpoint
record at the beginning of recovery.
Previously, the presence of the redo record was not checked. If the
redo and checkpoint records were located on different WAL segments, it
would be possible to miss an entire range of WAL records that should
have been replayed but were just ignored. The consequences of missing
the redo record depend on the version involved, becoming worse the
older the version:
- On HEAD, v18 and v17, recovery fails with a pointer dereference at the
beginning of the redo loop, as the redo record is expected but cannot be
found. These versions are good students, because we detect a failure
before doing anything, even if the failure is misleading, taking the
shape of a segmentation fault that gives no hint that the redo record
is missing.
- In v16 and v15, problems show up at the end of recovery within
FinishWalRecovery(), the startup process using a bogus LSN to decide
from where to start writing WAL. The cluster gets corrupted, but it is
at least noisy about it.
- v14 and older versions are worse: the cluster gets corrupted and is
entirely silent about the matter. The missing redo record causes the
startup process to skip recovery entirely, because a missing record is
treated the same as no redo being required at all. This leads to data
loss, as everything between the redo record and the checkpoint record
is missed.
Note that I have tested that down to 9.4, reproducing the issue with a
version of the author's reproducer slightly modified. The code is wrong
since at least 9.2, but I did not look at the exact point of origin.
This problem has been found by debugging a cluster where the WAL
segment including the redo record was missing due to an operator error,
leading to a crash, based on an investigation in v15.
Requesting archive recovery with the creation of a recovery.signal or
a standby.signal even without a backup_label would mitigate the issue:
if the record cannot be found in pg_wal/, the missing segment can be
retrieved with a restore_command when checking that the redo record
exists. This was already the case without this commit, where recovery
would re-fetch the WAL segment that includes the redo record. The check
introduced by this commit causes the segment to be retrieved earlier,
to make sure that the redo record can be found.
On HEAD, the code will be slightly changed in a follow-up commit to not
rely on a PANIC, to include a test able to emulate the original problem.
This is a minimal backpatchable fix, kept separated for clarity.
Reported-by: Andres Freund <andres@anarazel.de>
Analyzed-by: Andres Freund <andres@anarazel.de>
Author: Nitin Jadhav <nitinjadhavpostgres@gmail.com>
Discussion: https://postgr.es/m/20231023232145.cmqe73stvivsmlhs@awork3.anarazel.de
Discussion: https://postgr.es/m/CAMm1aWaaJi2w49c0RiaDBfhdCL6ztbr9m=daGqiOuVdizYWYaA@mail.gmail.com
Backpatch-through: 14
This reverts commit 7db6809ced.
That doesn't seem to work with recent (last couple of years)
glibc, and the reasons are obscure. I can't let the farm stay
this broken for long.
Now that the prior commits have fixed missing OAuth translations, pull
the bespoke usage of libpq_gettext() for OAUTHBEARER parsing into
oauth_json_set_error() itself, and make that a gettext trigger as well,
to better match what the other sites are doing. Add an _internal()
variant to handle the existing untranslated case.
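The shape of the change, roughly (the struct and field names here are
made up for illustration):

    /* Now a gettext trigger: does the libpq_gettext() call itself. */
    static void
    oauth_json_set_error(struct oauth_parse *ctx, const char *fmt, ...)
    {
        va_list     args;

        va_start(args, fmt);
        vsnprintf(ctx->errbuf, sizeof(ctx->errbuf),
                  libpq_gettext(fmt), args);
        va_end(args);
    }

    /* The _internal() variant is identical, minus the libpq_gettext()
     * call, for messages that intentionally stay untranslated. */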
Suggested-by: Daniel Gustafsson <daniel@yesql.se>
Reviewed-by: Álvaro Herrera <alvherre@kurilemu.de>
Discussion: https://postgr.es/m/0EEBCAA8-A5AC-4E3B-BABA-0BA7A08C361B%40yesql.se
Backpatch-through: 18
Some error messages are generated when OAuth multiplexer operations fail
unexpectedly in the client. Álvaro pointed out that these are both
difficult to translate idiomatically (as they use internal terminology
heavily) and of dubious translation value to end users (since they're
going to need to get developer help anyway). The response parsing engine
has a similar issue.
Remove these from the translation files by introducing internal variants
of actx_error() and oauth_parse_set_error().
Suggested-by: Álvaro Herrera <alvherre@kurilemu.de>
Reviewed-by: Álvaro Herrera <alvherre@kurilemu.de>
Discussion: https://postgr.es/m/CAOYmi%2BkQQ8vpRcoSrA5EQ98Wa3G6jFj1yRHs6mh1V7ohkTC7JA%40mail.gmail.com
Backpatch-through: 18
Several strings that should have been translated as they passed through
libpq_gettext were not actually being pulled into the translation files,
because I hadn't directly wrapped them in one of the GETTEXT_TRIGGERS.
Move the responsibility for calling libpq_gettext() to the code that
sets actx->errctx. Doing the same in report_type_mismatch() would result
in double-translation, so mark those strings with gettext_noop()
instead. And wrap two ternary operands with gettext_noop(), even though
they're already in one of the triggers, since xgettext sees only the
first.
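The ternary case is of this shape; without the second gettext_noop(),
xgettext would extract only the first literal (the messages here are
placeholders):

    const char *msg;

    msg = libpq_gettext(found ? gettext_noop("placeholder message A")
                              : gettext_noop("placeholder message B"));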
Finally, fe-auth-oauth.c was missing from nls.mk, so none of that file
was being translated at all. Add it now.
Original patch by Zhijie Hou, plus suggested tweaks by Álvaro Herrera
and small additions by me.
Reported-by: Zhijie Hou <houzj.fnst@fujitsu.com>
Author: Zhijie Hou <houzj.fnst@fujitsu.com>
Co-authored-by: Álvaro Herrera <alvherre@kurilemu.de>
Co-authored-by: Jacob Champion <jacob.champion@enterprisedb.com>
Reviewed-by: Álvaro Herrera <alvherre@kurilemu.de>
Discussion: https://postgr.es/m/TY4PR01MB1690746DB91991D1E9A47F57E94CDA%40TY4PR01MB16907.jpnprd01.prod.outlook.com
Backpatch-through: 18
I had supposed that the majority of machines with gettext installed
would have most language locales installed, but at least in the
buildfarm it turns out less than half have es_ES installed. So
depending on that to run the test now seems like a bad idea. But it
turns out that gettext can be persuaded to "translate" even in the C
locale, as long as you fake out its short-circuit logic by spelling
the locale name like "C.UTF-8" or similar. (Many thanks to Bryan
Green for correcting my misconceptions about that.) Quick testing
suggests that that spelling is accepted by most platforms, though
again the buildfarm may show that "most" isn't "all".
Hence, remove the es_ES dependency and instead create a "C" message
catalog. I've made the test unconditionally set lc_messages to
'C.UTF-8'. That approach might need adjustment depending on what
the buildfarm shows, but let's keep it simple until proven wrong.
While at it, tweak the test so that we run the various ereport's
even when !ENABLE_NLS. This is useful to verify that the macros
provided by <inttypes.h> are compatible with snprintf.c, as we
now know is worth questioning.
Discussion: https://postgr.es/m/1991599.1765818338@sss.pgh.pa.us
Late-model gcc with -fsanitize=undefined enabled issues warnings
about uses of PageGetItemId() when it can't prove that the
offsetNumber is > 0. The call sites where this happens are
checking that the offnum is <= PageGetMaxOffsetNumber(page), so
it seems reasonable to add an explicit check that offnum >= 1 too.
While at it, rearrange the code to be less contorted and avoid
duplicate checks on PageGetMaxOffsetNumber. Maybe the compiler
would optimize away the duplicate logic or maybe not, but the
existing coding has little to recommend it anyway.
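Approximately, the rearrangement is this (a sketch of the pattern, not
the exact heapam.c code):

    ItemId      lp = NULL;

    /* Old coding: PageGetMaxOffsetNumber() checked twice, lower bound
     * left implicit. */
    if (PageGetMaxOffsetNumber(page) >= offnum)
        lp = PageGetItemId(page, offnum);
    if (PageGetMaxOffsetNumber(page) < offnum || !ItemIdIsNormal(lp))
        elog(ERROR, "invalid lp");

    /* New coding: one explicit range check, including the lower bound. */
    if (offnum < FirstOffsetNumber ||
        offnum > PageGetMaxOffsetNumber(page))
        elog(ERROR, "invalid lp");
    lp = PageGetItemId(page, offnum);
    if (!ItemIdIsNormal(lp))
        elog(ERROR, "invalid lp");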
There are multiple instances of this identical coding pattern in
heapam.c and heapam_xlog.c. Current gcc only complains about two
of them, but I fixed them all in the name of consistency.
Potentially this could be back-patched in the name of silencing
warnings; but I think enabling UBSAN is mainly something people
would do on HEAD, so for now it seems not worth the trouble.
Discussion: https://postgr.es/m/1699806.1765746897@sss.pgh.pa.us
The workload to generate multixids before upgrade is very slow on
buildfarm members running with JIT enabled. The workload runs a lot of
small queries, so it's unsurprising that JIT makes it slower. On my
laptop it nevertheless runs in under 10 s even with JIT enabled, while
some buildfarm members have been hitting the 180 s timeout. That seems
extreme, but I suppose it's still expected on very slow and busy
buildfarm animals. The timeout applies to the BackgroundPsql sessions
as a whole rather than to the individual queries.
Bump up the timeout to avoid the test failures. Add periodic progress
reports to the test output so that we get a better picture of just how
slow the test is.
In passing, also fix comments about how many multixids and members
the workload generates. The comments were written based on 10 parallel
connections, but it actually uses 20.
Discussion: https://www.postgresql.org/message-id/b7faf07c-7d2c-4f35-8c43-392e057153ef@gmail.com
In the server, check explicitly for multixids with zero members. We
used to have an assertion for it, but commit d4b7bde418 replaced that
with more extensive runtime checks, which missed the original case of
zero members.
In the upgrade code, a negative length never makes sense, so better
check for it explicitly. Commit d4b7bde418 added a similar sanity
check to the corresponding server code on master, and in backbranches,
the 'length' is passed to palloc which would fail with "invalid memory
alloc request size" error. Clarify the comments on what kind of
invalid entries are tolerated by the upgrade code and which ones are
reported as fatal errors.
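The checks amount to something like this (error wording approximate):

    /* server side */
    if (nmembers <= 0)
        ereport(ERROR,
                (errcode(ERRCODE_DATA_CORRUPTED),
                 errmsg("multixact %u has invalid number of members %d",
                        multi, nmembers)));

    /* upgrade code: a negative length never makes sense */
    if (length < 0)
        pg_fatal("invalid length %d for multixact %u", length, multi);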
Coverity complained about 'length' in the upgrade code being
tainted. That's bogus because we trust the data on disk at least to
some extent, but hopefully this will silence the complaint. If not,
I'll dismiss it manually.
Discussion: https://www.postgresql.org/message-id/7b505284-c6e9-4c80-a7ee-816493170abc@iki.fi
We have tried to stabilize them several times already, but they are very
flaky -- apparently there's some intrinsic instability that's hard to
solve with the isolationtester framework. They are very noisy in CI
runs (whereas buildfarm has not registered any such failures). They may
need to be rewritten completely. In the meantime just comment them out
in Makefile/meson.build, leaving the spec files around.
Per complaint from Andres Freund.
Discussion: https://postgr.es/m/202512112014.icpomgc37zx4@alvherre.pgsql
HAVE__STATIC_ASSERT was really a test for GCC statement expressions,
as needed for StaticAssertExpr() now that _Static_assert could be
assumed to be available through our C11 requirement. This
artificially prevented Visual Studio from being able to use
static_assert() in other contexts.
Instead, make a new test for HAVE_STATEMENT_EXPRESSIONS, and use that
to control only whether StaticAssertExpr() uses fallback code, not the
other variants. This improves the quality of failure messages in the
(much more common) other variants under Visual Studio.
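The resulting arrangement looks roughly like this (simplified from the
real c.h coding):

    #ifdef HAVE_STATEMENT_EXPRESSIONS
    /* GCC and friends: a statement expression around static_assert. */
    #define StaticAssertExpr(condition, errmessage) \
        ((void) ({ static_assert(condition, errmessage); true; }))
    #else
    /* Fallback: a negative bit-field width fails to compile. */
    #define StaticAssertExpr(condition, errmessage) \
        ((void) sizeof(struct { int static_assert_failure : (condition) ? 1 : -1; }))
    #endif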
Also get rid of the two separate implementations for C++, since the C
implementation is also valid as C++11. While it is a stretch to
apply HAVE_STATEMENT_EXPRESSIONS tested with $CC to a C++ compiler,
the previous C++ coding assumed that the C++ compiler had them
unconditionally, so it isn't a new stretch. In practice, the C and
C++ compilers are very likely to agree, and if a combination is ever
reported that falsifies this assumption we can always reconsider that.
Author: Thomas Munro <thomas.munro@gmail.com>
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Discussion: https://postgr.es/m/CA%2BhUKGKvr0x_oGmQTUkx%3DODgSksT2EtgCA6LmGx_jQFG%3DsDUpg%40mail.gmail.com
Coverity complained that offset cannot be 0 here because there's an
explicit check for "offset == 0" earlier in the function, but it
didn't see the possibility that offset could've wrapped around to 0.
The code is correct, but clarify the comment about it.
The same code exists in backbranches in the server
GetMultiXactIdMembers() function and in 'master' in the pg_upgrade
GetOldMultiXactIdSingleMember() function. In backbranches Coverity
didn't complain about it because the check was merely an assertion,
but change the comment in all supported branches for consistency.
Per Tom Lane's suggestion.
Discussion: https://www.postgresql.org/message-id/1827755.1765752936@sss.pgh.pa.us
Previously, pg_sync_replication_slots() would finish without synchronizing
slots that didn't meet requirements, rather than failing outright. This
could leave some failover slots unsynchronized if required catalog rows or
WAL segments were missing or at risk of removal, while the standby
continued removing needed data.
To address this, the function now waits for the primary slot to advance to
a position where all required data is available on the standby before
completing synchronization. It retries cyclically until all failover slots
that existed on the primary at the start of the call are synchronized.
Slots created after the function begins are not included. If the standby
is promoted during this wait, the function exits gracefully and the
temporary slots will be removed.
Author: Ajin Cherian <itsajin@gmail.com>
Author: Hou Zhijie <houzj.fnst@fujitsu.com>
Reviewed-by: Shveta Malik <shveta.malik@gmail.com>
Reviewed-by: Japin Li <japinli@hotmail.com>
Reviewed-by: Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>
Reviewed-by: Ashutosh Sharma <ashu.coek88@gmail.com>
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Reviewed-by: Yilin Zhang <jiezhilove@126.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Discussion: https://postgr.es/m/CAFPTHDZAA%2BgWDntpa5ucqKKba41%3DtXmoXqN3q4rpjO9cdxgQrw%40mail.gmail.com
I have fat-fingered an error message related to an offset while
switching the code to use pgoff_t. Let's switch to the same error
message used in the rest of the tree for similar failures with fseeko(),
instead.
Per buildfarm members running macOS: longfin, sifaka and indri.
Cumulative stats kinds gain the capability to write additional per-entry
data when flushing the stats at shutdown, and read this data when
loading back the stats at startup. This is a good fit, for example,
for variable-length data (like normalized query strings), as it
becomes possible to link the shared memory stats entries to data that
is stored in a different area, like a DSA segment.
Three new optional callbacks are added to PgStat_KindInfo, available to
variable-numbered stats kinds:
* to_serialized_data: writes auxiliary data for an entry.
* from_serialized_data: reads auxiliary data for an entry.
* finish: performs actions after read/write/discard operations. This is
invoked after processing all the entries of a kind, allowing extensions
to close file handles and clean up resources.
Stats kinds have the option to store this data in the existing pgstats
file, but they can as well store it in one or more additional files
whose names can be built from the entry keys. The new serialization
callbacks are called once an entry key is read from or written to the
main stats file. A file descriptor to the main pgstats file is
available in the arguments of the callbacks.
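For a custom stats kind, wiring this up could look like the following
(callback names hypothetical, other required fields elided):

    static const PgStat_KindInfo custom_stats = {
        .name = "custom_stats",
        /* ... other fields ... */
        .to_serialized_data = custom_to_serialized_data,
        .from_serialized_data = custom_from_serialized_data,
        .finish = custom_stats_finish,
    };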
Author: Sami Imseih <samimseih@gmail.com>
Co-authored-by: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Discussion: https://postgr.es/m/CAA5RZ0s9SDOu+Z6veoJCHWk+kDeTktAtC-KY9fQ9Z6BJdDUirQ@mail.gmail.com
The current list from the buildfarm includes quite a few typedef
names that it used to miss. The reason is a bit obscure, but it
seems likely to have something to do with our recent increased
use of palloc_object and palloc_array. In any case, this makes
the relevant struct declarations be much more nicely formatted,
so I'll take it. Install the current list and re-run pgindent
to update affected code.
Syncing with the current list also removes some obsolete
typedef names and fixes some alphabetization errors.
Discussion: https://postgr.es/m/1681301.1765742268@sss.pgh.pa.us
There doesn't seem to be any great reason why this has been a macro
rather than a typedef. But doing it like that means our buildfarm
typedef tooling doesn't capture the name as a typedef. That would
result in pgindent glitches, except that we've seemingly kept it
in typedefs.list manually. That's obviously error-prone, so let's
convert it to a typedef now.
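Generically, the change is of this form (the names here are made up;
the actual macro being converted is not shown):

    /* Before: the buildfarm's typedef harvesting never sees this name. */
    #define ExampleHandle struct example_data *

    /* After: a real typedef, picked up automatically. */
    typedef struct example_data *ExampleHandle;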
Discussion: https://postgr.es/m/1681301.1765742268@sss.pgh.pa.us
While commit 5b275a6e1 fixed a few unhappy buildfarm animals,
it looks like the remainder simply don't have any es_ES locale
at all. There's little point in running the test in that case,
so minimize the number of variant expected-files by bailing out.
Also emit a log entry so that it's possible to tell from buildfarm
postmaster logs which case occurred.
Possibly, the scope of this testing could be improved by providing
additional translations. But I think it's likely that the failing
animals have no non-C locales installed at all.
In passing, update typedefs.list so that koel doesn't think
regress.c is misformatted.
Discussion: https://postgr.es/m/E1vUpNU-000kcQ-1D@gemulon.postgresql.org
The private refcount entry for a buffer is often looked up repeatedly for the
same buffer, e.g. to pin and then unpin a buffer. Benchmarking shows that it's
worthwhile to have a one-entry cache for that case. With that cache in place,
it's worth splitting GetPrivateRefCountEntry() into a small inline
portion (for the cache hit case) and an out-of-line helper for the rest.
This is helpful for some workloads today, but becomes more important in an
upcoming patch that will utilize the private refcount infrastructure to also
store whether the buffer is currently locked, as that increases the rate of
lookups substantially.
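A sketch of the split (simplified; not the exact bufmgr.c coding):

    static PrivateRefCountEntry *PrivateRefCountEntryCache = NULL;

    static inline PrivateRefCountEntry *
    GetPrivateRefCountEntry(Buffer buffer, bool do_move)
    {
        /* Fast path: a repeated lookup of the same buffer hits the
         * one-entry cache without searching the array or hash table. */
        if (PrivateRefCountEntryCache != NULL &&
            PrivateRefCountEntryCache->buffer == buffer)
            return PrivateRefCountEntryCache;

        /* Slow path lives out of line to keep the inline part small. */
        return GetPrivateRefCountEntrySlow(buffer, do_move);
    }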
Reviewed-by: Melanie Plageman <melanieplageman@gmail.com>
Discussion: https://postgr.es/m/6rgb2nvhyvnszz4ul3wfzlf5rheb2kkwrglthnna7qhe24onwr@vw27225tkyar
This makes lookups faster by allowing them to be auto-vectorized. It is also
beneficial for an upcoming patch, independent of auto-vectorization, as the
upcoming patch wants to track more information for each pinned buffer, making
the existing loop, iterating over an array of PrivateRefCountEntry, more
expensive due to increasing its size.
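In outline, the idea is to keep the keys densely packed in their own
array, so the search loop reads less memory and can compile down to
SIMD compares (a sketch):

    static Buffer PrivateRefCountBuffers[REFCOUNT_ARRAY_ENTRIES];
    static PrivateRefCountEntry PrivateRefCountArray[REFCOUNT_ARRAY_ENTRIES];

    /* A tight loop over a plain Buffer array auto-vectorizes easily. */
    for (int i = 0; i < REFCOUNT_ARRAY_ENTRIES; i++)
    {
        if (PrivateRefCountBuffers[i] == buffer)
            return &PrivateRefCountArray[i];
    }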
Reviewed-by: Melanie Plageman <melanieplageman@gmail.com>
Discussion: https://postgr.es/m/fvfmkr5kk4nyex56ejgxj3uzi63isfxovp2biecb4bspbjrze7@az2pljabhnff
While CI testing in advance of commit 8c498479d suggested that all
Unix-ish platforms would accept 'es_ES.UTF-8', the buildfarm has
a different opinion. Let's dynamically select something that works,
if possible.
Discussion: https://postgr.es/m/E1vUpNU-000kcQ-1D@gemulon.postgresql.org
Coverity complained about this, not without reason:
OldMultiXactReader *state = state = pg_malloc(sizeof(*state));
(I'm surprised this is even legal C ... why is "state" in-scope
in its initialization expression?)
While at it, convert to use our newly-preferred "pg_malloc_object"
macro instead of an explicit sizeof().
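With that, the fixed line becomes the equivalent of:

    OldMultiXactReader *state = pg_malloc_object(OldMultiXactReader);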
We've never actually had a formal test for this facility.
It seems worth adding one now, mainly because we are starting
to depend on gettext() being able to handle the PRI* macros
from <inttypes.h>, and it's not all that certain that that
works everywhere. So the test goes to a bit of effort to
check all the PRI* macros we are likely to use.
(This effort has indeed found one problem already, now fixed
in commit f8715ec86.)
Discussion: https://postgr.es/m/3098752.1765221796@sss.pgh.pa.us
Discussion: https://postgr.es/m/292844.1765315339@sss.pgh.pa.us
Change WAIT_LSN_TYPE_COUNT from an enum sentinel to a macro definition,
in a similar way to the IOObject, IOContext, and BackendType enums.
Remove the explicit enum value assignments as well.
Author: Xuneng Zhou <xunengzhou@gmail.com>
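The pattern follows the precedent of those enums (the member names here
are illustrative):

    typedef enum WaitLSNType
    {
        WAIT_LSN_TYPE_RECOVERY_REPLAY,
        WAIT_LSN_TYPE_STANDBY_FLUSH,
    } WaitLSNType;

    #define WAIT_LSN_TYPE_COUNT (WAIT_LSN_TYPE_STANDBY_FLUSH + 1)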