The comment introduced by commit e7cb7ee14 was a bit too terse, which
could lead to extensions doing different things within the hook function
than we intend to allow. Extend the comment to explain what they can do
within the hook function.
Back-patch to all supported branches.
In passing, I rephrased a nearby comment that I recently added to the
back branches.
Reviewed by David Rowley and Andrei Lepikhov.
Discussion: https://postgr.es/m/CAPmGK15SBPA1nr3Aqsdm%2BYyS-ay0Ayo2BRYQ8_A2To9eLqwopQ%40mail.gmail.com
When a snapshot file fails to be read in ImportSnapshot(), it would
issue an "invalid snapshot identifier" ERROR when opening a stream for
it in read-only mode. The error handling is improved to be more
talkative in failure cases:
- If a snapshot identifier uses incorrect characters, complain with the
same error as before this commit.
- If the snapshot file cannot be found in pg_snapshots/, complain with a
"snapshot \"foo\" does not exist" instead. This maps to the case where
AllocateFile() fails on ENOENT. Based on a suggestion from Andres
Freund.
- If AllocateFile() fails with an errno other than ENOENT, report the
error with more details in %m instead, as these failures are never
expected.
b29504eeb489 was a first take at improving this. The older error
message has existed since bb446b689b, which introduced snapshot
imports. Two test cases are added to cover the cases of an identifier
with an incorrect format and of a missing snapshot.
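A rough sketch of the resulting checks (simplified; the exact messages
and surrounding code differ):

    /* Reject identifiers with characters outside the expected set. */
    if (strspn(idstr, "0123456789ABCDEF-") != strlen(idstr))
        ereport(ERROR,
                (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
                 errmsg("invalid snapshot identifier: \"%s\"", idstr)));

    f = AllocateFile(path, PG_BINARY_R);
    if (!f)
    {
        if (errno == ENOENT)
            ereport(ERROR,
                    (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
                     errmsg("snapshot \"%s\" does not exist", idstr)));
        else
            ereport(ERROR,
                    (errcode_for_file_access(),
                     errmsg("could not open file \"%s\" for reading: %m",
                            path)));
    }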
Author: Bharath Rupireddy
Reviewed-by: Andres Freund, Daniel Gustafsson, Michael Paquier
Discussion: https://postgr.es/m/CALj2ACWmr=3KdxDkm8h7Zn1XxBoF6hdzq8WQyMn2y1OL5RYFrg@mail.gmail.com
There are a couple of places in frontend code that could make use
of this simple binary heap implementation. This commit makes
binaryheap usable in frontend code, much like commit 26aaf97b68 did
for StringInfo. Like StringInfo, the header file is left in lib/
to reduce the likelihood of unnecessary breakage.
The frontend version of binaryheap exposes a void *-based API since
frontend code does not have access to the Datum definitions. This
seemed like a better approach than switching all existing uses to
void * or making the Datum definitions available to frontend code.
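A minimal sketch of how frontend code might use the void *-based API,
assuming the usual binaryheap_allocate/add/first entry points:

    #include "lib/binaryheap.h"

    /* Order two ints; the heap keeps the "largest" element on top. */
    static int
    compare_ints(bh_node_type a, bh_node_type b, void *arg)
    {
        int     ia = *(int *) a;
        int     ib = *(int *) b;

        return (ia > ib) - (ia < ib);
    }

    /* ... */
    int         vals[] = {3, 1, 2};
    binaryheap *h = binaryheap_allocate(3, compare_ints, NULL);

    for (int i = 0; i < 3; i++)
        binaryheap_add(h, &vals[i]);

    int *top = (int *) binaryheap_first(h);   /* points to vals[0], i.e. 3 */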
Reviewed-by: Tom Lane, Alvaro Herrera
Discussion: https://postgr.es/m/3612876.1689443232%40sss.pgh.pa.us
cursor_to_xmlschema() assumed that any Portal must have a tupDesc,
which is not so. Add a defensive check.
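A sketch of the added guard (error wording approximate):

    portal = SPI_cursor_find(name);
    if (portal == NULL)
        ereport(ERROR,
                (errcode(ERRCODE_UNDEFINED_CURSOR),
                 errmsg("cursor \"%s\" does not exist", name)));
    if (portal->tupDesc == NULL)
        ereport(ERROR,
                (errcode(ERRCODE_INVALID_CURSOR_STATE),
                 errmsg("portal \"%s\" does not return tuples", name)));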
It's plausible that this mistake occurred because of the rather
poorly chosen name of the lookup function SPI_cursor_find(),
which in such cases is returning something that isn't very much
like a cursor. Add some documentation to try to forestall future
errors of the same ilk.
Report and patch by Boyu Yang (docs changes by me). Back-patch
to all supported branches.
Discussion: https://postgr.es/m/dd343010-c637-434c-a8cb-418f53bda3b8.yangboyu.yby@alibaba-inc.com
expandRecordVariable() failed to adjust the parse nesting structure
correctly when recursing to inspect an outer-level Var. This could
result in assertion failures or core dumps in corner cases.
Likewise, get_name_for_var_field() failed to adjust the deparse
namespace stack correctly when recursing to inspect an outer-level
Var. In this case the likely result was a "bogus varno" error
while deparsing a view.
Per bug #18077 from Jingzhou Fu. Back-patch to all supported
branches.
Richard Guo, with some adjustments by me
Discussion: https://postgr.es/m/18077-b9db97c6e0ab45d8@postgresql.org
When tracking IO timing for WAL, the duration is what we calculate
based on the start and end timestamps; it's not what the variable
contains. Rename the timestamp variable to "end" to better
communicate what it contains. Original patch by Krishnakumar, with
additional hacking to fix another occurrence by me.
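The pattern, as a simplified sketch:

    instr_time  start,
                end;

    INSTR_TIME_SET_CURRENT(start);
    written = pg_pwrite(fd, buf, nbytes, offset);
    INSTR_TIME_SET_CURRENT(end);    /* an end timestamp, not a duration */

    /* the duration is computed here, as end - start */
    INSTR_TIME_ACCUM_DIFF(PendingWalStats.wal_write_time, end, start);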
Author: Krishnakumar R <kksrcv001@gmail.com>
Discussion: https://postgr.es/m/CAPMWgZ9f9o8awrQpjo8oxnNQ=bMDVPx00NE0QcDzvHD_ZrdLPw@mail.gmail.com
This became safe after commit 4b4798e138. The smgrcreate() call will
now register the segment for syncing at the next checkpoint, so we
don't need to sync it here. If a checkpoint happens before the
creation is WAL-logged, the records will be replayed when starting
recovery from the checkpoint. If a checkpoint happens after the WAL
logging, the checkpoint will fsync() it.
In passing, clarify a comment in smgrDoPendingSyncs().
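As an illustration of the pattern made safe by 4b4798e138 (a sketch,
not the exact call site):

    smgrcreate(RelationGetSmgr(rel), VISIBILITYMAP_FORKNUM, false);
    log_smgrcreate(&rel->rd_locator, VISIBILITYMAP_FORKNUM);

    /*
     * No smgrimmedsync() here anymore: smgrcreate() registered the new
     * segment for syncing at the next checkpoint.
     */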
Discussion: https://www.postgresql.org/message-id/6e5bbc08-cdfc-b2b3-9e23-1a914b9850a9%40iki.fi
Reviewed-by: Robert Haas
The majority of filenames are quoted in user-facing error and log
messages, but a few were still printed without quotes. While these
filenames do not risk causing any ambiguity, as their format is
strict, quote them anyway to be consistent across all logs.
Also concatenate one message into a single line to make it easier to
grep for in the code.
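That is, the usual message style:

    ereport(LOG,
            (errcode_for_file_access(),
             errmsg("could not open file \"%s\": %m", path)));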
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/080EEABE-6645-4A46-AB20-6285ADAC44FE@yesql.se
This reverts commit a0d87bcd9b, following a remark from Andres Freund
that the new error can be triggered with an incorrect SET TRANSACTION
SNAPSHOT command without really being helpful for the user, as it
uses the internal file name.
Discussion: https://postgr.es/m/20230914020724.hlks7vunitvtbbz4@awork3.anarazel.de
Backpatch-through: 11
It's entirely possible for a logical slot to have a confirmed_flush LSN
higher than the last value saved on disk while not being marked as dirty.
Currently, it is not a major problem but a later patch adding support for
the upgrade of slots relies on that value being properly flushed to disk.
It can also help avoid processing the same transactions again in some
boundary cases after a clean shutdown and restart. Say, we process
some transactions for which we didn't send anything downstream (the
changes got filtered) but the confirmed_flush LSN is updated due to
keepalives. As we don't flush the latest value of the confirmed_flush
LSN, it may lead to processing the same changes again without this
patch.
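A schematic sketch of the shutdown-time check, where
last_saved_confirmed_flush is the tracking field this patch
introduces (locking elided):

    /* At a shutdown checkpoint, flush slots whose confirmed_flush moved. */
    if (is_shutdown && SlotIsLogical(s) &&
        s->data.confirmed_flush != s->last_saved_confirmed_flush)
    {
        s->just_dirtied = true;
        s->dirty = true;
    }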
The approach taken by this patch has been suggested by Ashutosh Bapat.
Author: Vignesh C, Julien Rouhaud, Kuroda Hayato
Reviewed-by: Amit Kapila, Dilip Kumar, Michael Paquier, Ashutosh Bapat, Peter Smith, Hou Zhijie
Discussion: http://postgr.es/m/CAA4eK1JzJagMmb_E8D4au=GYQkxox0AfNBm1FbP7sy7t4YWXPQ@mail.gmail.com
Discussion: http://postgr.es/m/TYAPR01MB58664C81887B3AF2EB6B16E3F5939@TYAPR01MB5866.jpnprd01.prod.outlook.com
When a snapshot file fails to be read in ImportSnapshot(), it would
issue an "invalid snapshot identifier" ERROR when opening a stream
for it in read-only mode. This error message is reworded to be the
same
as all the other messages used in this case on failure, which is useful
when debugging this area.
Thinko introduced by bb446b689b, where snapshot imports were added.
A backpatch down to 11 is done as this can improve any work related
to snapshot imports in older branches.
Author: Bharath Rupireddy
Reviewed-by: Daniel Gustafsson
Discussion: https://postgr.es/m/CALj2ACWmr=3KdxDkm8h7Zn1XxBoF6hdzq8WQyMn2y1OL5RYFrg@mail.gmail.com
Backpatch-through: 11
These code paths should not be reached normally, but if they are, an
error with "(null)" as information for the collation provider would
show up when no locale is set, while we can assume that we are
referring to libc.
This refactors the code so that the provider is always reported even
if no locale is set. While on it, the name of the function where the
error happens is added, as it can be helpful for debugging.
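Roughly, with an illustrative function name and message wording:

    if (!locale)
        elog(ERROR, "%s: missing locale for provider \"%s\"",
             "get_collation_actual_version",
             collprovider == COLLPROVIDER_ICU ? "icu" : "libc");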
Issue introduced by d87d548cd0, so backpatch down to 16.
Author: Michael Paquier, Ranier Vilela
Reviewed-by: Jeff Davis, Kyotaro Horiguchi
Discussion: https://postgr.es/m/7073610042fcf97e1bea2ce08b7e0214b5e11094.camel@j-davis.com
Backpatch-through: 16
Both 50e17ad28 and 29f45e299 tried to record a plan dependency on a
function but mistakenly inverted the OidIsValid test. This meant
that we'd record a dependency only when the function's Oid was
InvalidOid. Clearly this was meant to *not* record the dependency in
that case.
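That is, with funcid standing in for the function's Oid:

    /* buggy: records the dependency only when funcid is InvalidOid */
    if (!OidIsValid(funcid))
        record_plan_function_dependency(root, funcid);

    /* fixed */
    if (OidIsValid(funcid))
        record_plan_function_dependency(root, funcid);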
50e17ad28 made this mistake first, then in v15 29f45e299 copied the same
mistake.
Reported-by: Tom Lane
Backpatch-through: 14, where 50e17ad28 first made this mistake
Discussion: https://postgr.es/m/2277537.1694301772@sss.pgh.pa.us
If an out-of-memory error was thrown at an unfortunate time,
ensure_record_cache_typmod_slot_exists() could leak memory and leave
behind global state that produced an infinite loop on the next call.
Fix by merging RecordCacheArray and RecordIdentifierArray into a single
array. With only one allocation or re-allocation, there is no
intermediate state.
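Schematically:

    typedef struct RecordCacheArrayEntry
    {
        uint64      id;
        TupleDesc   tupdesc;
    } RecordCacheArrayEntry;

    /*
     * One array means one allocation: a failed repalloc() can no longer
     * leave the id and tupdesc halves out of sync.
     */
    static RecordCacheArrayEntry *RecordCacheArray = NULL;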
Back-patch to all supported releases.
Reported-by: "James Pang (chaolpan)" <chaolpan@cisco.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/PH0PR11MB519113E738814BDDA702EDADD6EFA%40PH0PR11MB5191.namprd11.prod.outlook.com
generation_counter includes time spent on both JIT:ing expressions
and tuple deforming, which are configured independently via the
options jit_expressions and jit_tuple_deforming. As they are combined
in the same counter, it's not apparent what fraction of the time
tuple deforming takes.
This adds deform_counter, dedicated to tuple deforming, which allows
seeing more directly the influence jit_tuple_deforming is having on
the query. The counter is exposed in EXPLAIN and pg_stat_statements,
bumping pg_stat_statements to 1.11.
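On the instrumentation side, a simplified sketch mirroring how
generation_counter is accumulated:

    INSTR_TIME_SET_CURRENT(starttime);
    /* ... build and compile the tuple-deforming function ... */
    INSTR_TIME_SET_CURRENT(endtime);
    INSTR_TIME_ACCUM_DIFF(context->base.instr.deform_counter,
                          endtime, starttime);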
Author: Dmitry Dolgov <9erthalion6@gmail.com>
Reviewed-by: Pavel Stehule <pavel.stehule@gmail.com>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Discussion: https://postgr.es/m/20220612091253.eegstkufdsu4kfls@erthalion.local
The WAIT_USE_WIN32 implementation of WaitEventSetWait() previously
reported at most one event per call, because that's what the underlying
WaitForMultipleObjects() call does.
We can make the behavior match the three Unix implementations by looping
until our output buffer is full, or there are no more events available
now. This makes no difference to most callers, including the regular
FEBE socket code, since they ask for at most one event anyway. A
difference in socket accept priority might be perceived by end users
after commit 7389aad6 started using WaitEventSet in the postmaster.
With this commit, the accept order now matches Unix systems, servicing
listening sockets in round-robin order.
We decided it wasn't really a bug or worth back-patching, but it seems
good to align the behavior across platforms.
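In rough outline (variable names illustrative, error handling elided):

    /* Sketch: drain as many events as fit in the output buffer. */
    rc = WaitForMultipleObjects(nhandles, handles, FALSE, timeout);
    while (rc != WAIT_TIMEOUT && returned_events < nevents)
    {
        /* ... translate the signaled handle into occurred_events ... */
        returned_events++;

        /* Poll the handles after the signaled one without waiting. */
        rc = WaitForMultipleObjects(nremaining, next_handle, FALSE, 0);
    }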
Reviewed-by: Andres Freund <andres@anarazel.de> (earlier version)
Tested-by: "Wei Wang (Fujitsu)" <wangw.fnst@fujitsu.com>
Discussion: https://postgr.es/m/CA%2BhUKG%2BA2dk29hr5zRP3HVJQ-_PncNJM6HVQ7aaYLXLRBZU-xw%40mail.gmail.com
Now that ATExecDropConstraint doesn't recurse anymore, it's wrong to
test privileges "during recursion" there. Move the check to
dropconstraint_internal, which is the place where recursion occurs.
In passing, remove now-useless 'recursing' argument to
ATExecDropConstraint.
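Schematically, in dropconstraint_internal() (a sketch; the exact
target flags may differ):

    /* Test permissions where the recursion actually happens. */
    if (recursing)
        ATSimplePermissions(AT_DropConstraint, rel,
                            ATT_TABLE | ATT_FOREIGN_TABLE);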
Discussion: https://postgr.es/m/202309051744.y4mndw5gwzhh@alvherre.pgsql
We shouldn't be doing non-trivial work in signal handlers in general,
and in this case the handler could reach unsafe code and corrupt state.
It also clobbered its own "reason" code.
Move all recovery conflict decision logic into the next
CHECK_FOR_INTERRUPTS(), and have the signal handler just set flags and
the latch, following the standard pattern. Since there are several
different "reasons", use a separate flag for each.
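A sketch of the handler side of this pattern (names follow this
description; details simplified):

    static volatile sig_atomic_t RecoveryConflictPending = false;
    static volatile sig_atomic_t RecoveryConflictPendingReasons[NUM_PROCSIGNALS];

    /* Runs in the signal handler: set flags and the latch, nothing more. */
    void
    HandleRecoveryConflictInterrupt(ProcSignalReason reason)
    {
        RecoveryConflictPendingReasons[reason] = true;
        RecoveryConflictPending = true;
        InterruptPending = true;
        SetLatch(MyLatch);
        /* the decision logic runs later, at CHECK_FOR_INTERRUPTS() */
    }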
With this refactoring, the recovery conflict system no longer
piggy-backs on top of the regular query cancelation mechanism, but
instead raises an error directly if it decides that is necessary. It
still needs to respect QueryCancelHoldoffCount, because otherwise the
FEBE protocol might get out of sync (see commit 2b3a8b20c2).
This fixes one class of intermittent failure in the new
031_recovery_conflict.pl test added by commit 9f8a050f, though the buggy
coding is much older. Failures outside contrived testing seem to be
very rare (or perhaps incorrectly attributed) in the field, based on
lack of reports.
No back-patch for now due to complexity and release schedule. We have
the option to back-patch into 16 later, as 16 has prerequisite commit
bea3d7e.
Reviewed-by: Andres Freund <andres@anarazel.de> (earlier version)
Reviewed-by: Michael Paquier <michael@paquier.xyz> (earlier version)
Reviewed-by: Robert Haas <robertmhaas@gmail.com> (earlier version)
Tested-by: Christoph Berg <myon@debian.org>
Discussion: https://postgr.es/m/CA%2BhUKGK3PGKwcKqzoosamn36YW-fsuTdOPPF1i_rtEO%3DnEYKSg%40mail.gmail.com
Discussion: https://postgr.es/m/CALj2ACVr8au2J_9D88UfRCi0JdWhyQDDxAcSVav0B0irx9nXEg%40mail.gmail.com
This commit renames RecoveryInitSyncMethod to DataDirSyncMethod and
moves it to common/file_utils.h. This is preparatory work for a
follow-up commit that will allow specifying the synchronization
method in frontend utilities such as pg_upgrade and pg_basebackup.
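After the rename, the enum looks roughly like:

    typedef enum DataDirSyncMethod
    {
        DATA_DIR_SYNC_METHOD_FSYNC,
        DATA_DIR_SYNC_METHOD_SYNCFS,
    } DataDirSyncMethod;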
Reviewed-by: Michael Paquier
Discussion: https://postgr.es/m/ZN2ZB4afQ2JbR9TA%40paquier.xyz
This file is now made of two columns, removing the column listing the
user-visible strings used in the system views and the documentation:
- Enum definitions for each class without the prefix "WAIT_EVENT_", so
that this information can be grepped in the code and
wait_event_names.txt at the same time.
at the same time.
- Description in the documentation.
The wait event names are now generated from the enum objects in
CamelCase, with the underscores removed. The data generated for wait
events is consistent with what was produced by 414f6c0fb7.
This has the advantage of removing WAIT_EVENT_DOCONLY, which was a
placeholder for the wait event types Lock and LWLock, as these two
only require the generation of the documentation.
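For instance, an entry along these lines (illustrative):

    WAL_SENDER_MAIN	"Waiting in main loop of WAL sender process."

now yields both the enum value WAIT_EVENT_WAL_SENDER_MAIN and the
generated user-visible name "WalSenderMain".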
Reviewed-by: Bertrand Drouvot
Discussion: https://postgr.es/m/ZOxVHQwEC/9X/p/z@paquier.xyz
The event names use the same case-insensitive characters, hence applying
lower() or upper() to the monitoring queries allows the detection of the
same events as before this change. It is possible to cross-check the
data with the system view pg_wait_events, for instance with a query
like the following, which shows no differences:
SELECT lower(type), lower(name), description
FROM pg_wait_events ORDER BY 1, 2;
This will help in the introduction of more simplifications in the format
of wait_event_names.txt. Some of the enum values in the code had to
be renamed a bit to follow the same naming convention across the
board.
Reviewed-by: Bertrand Drouvot
Discussion: https://postgr.es/m/ZOxVHQwEC/9X/p/z@paquier.xyz
Presently, frontend code that needs to use these macros must either
include storage/fd.h, which declares several frontend-unsafe
functions, or duplicate the macros. This commit moves these macros
to common/file_utils.h, which is safe for both frontend and backend
code. Consequently, we can also remove the duplicated macros in
pg_checksums and stop including storage/fd.h in pg_rewind.
Reviewed-by: Michael Paquier
Discussion: https://postgr.es/m/ZOP5qoUualu5xl2Z%40paquier.xyz
This lock was introduced before memory barrier support was added,
and it is only used to guarantee proper memory ordering when
KnownAssignedXidsAdd() appends to the array without a lock. Now
that such memory barrier support exists, we can remove the lock and
use barriers instead.
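The ordering contract, as a simplified sketch (local variable names
illustrative):

    /* writer: KnownAssignedXidsAdd(), appending without the lock */
    KnownAssignedXids[head] = xid;
    pg_write_barrier();         /* publish the xid before moving the head */
    procArray->headKnownAssignedXids = head + 1;

    /* readers */
    head = procArray->headKnownAssignedXids;
    pg_read_barrier();          /* read entries only after reading the head */
    xid = KnownAssignedXids[i]; /* for any i < head */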
Suggested-by: Tom Lane
Author: Michail Nikolaev
Reviewed-by: Robert Haas
Discussion: https://postgr.es/m/CANtu0oh0si%3DjG5z_fLeFtmYcETssQ08kLEa8b6TQqDm_cinroA%40mail.gmail.com
If all the bits of a key in a tsvector are true (marked with ALLISTRUE),
gtsvectorout() would show the following description:
"0 true bits, 0 false bits"
This is confusing, as all the bits are true, yet the output is the
same as what is printed when siglen is 0.
This commit improves the output so that "all true bits" shows instead
in this case. Alexander has proposed a regression test for
pageinspect, not included here as it is rather expensive compared to
its coverage value.
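A simplified sketch of the output logic (buffer and counters
illustrative):

    if (ISALLTRUE(key))
        appendStringInfoString(&buf, "all true bits");
    else
        appendStringInfo(&buf, "%d true bits, %d false bits",
                         ntrue, nfalse);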
Author: Alexander Lakhin
Discussion: https://postgr.es/m/17950-6c80a8d2b94ec695@postgresql.org
This could lead to an imprecise choice when splitting an index page of a
GiST index on a tsvector, deciding which entries should remain on the
old page and which entries should move to a new page.
This has been wrong since tsearch2 was moved into core with commit
140d4ebcb4, so backpatch all the way down. This error has been
spotted by valgrind.
Author: Alexander Lakhin
Discussion: https://postgr.es/m/17950-6c80a8d2b94ec695@postgresql.org
Backpatch-through: 11
Dropping a database while a connection was being attempted on it
could lead to the presence of valid database entries in shared
statistics. The issue is that MyDatabaseId was getting set earlier
than it should: if the connection attempt failed because the database
had been renamed or dropped, the shutdown callback of the shared
statistics would finish by re-inserting a valid entry for the
database already dropped.
As analyzed by the bug reporters, this issue could lead to phantom
entries in the database list maintained by the autovacuum launcher
(in rebuild_database_list()) if the database dropped was part of the
database list when it was still valid. After the database was dropped,
it would remain the highest on the list of databases to be considered
by the autovacuum worker. This would prevent autovacuum jobs from
happening on all the other databases still present.
The commit fixes this issue by delaying setting MyDatabaseId until the
database existence has been re-checked with the second scan on
pg_database after getting a shared lock on it, and by switching
pgstat_update_dbstats() so that nothing happens if MyDatabaseId is
not valid.
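A sketch of the new ordering (simplified; the recheck helper here is
illustrative):

    /* Take the lock first, then re-verify the database still exists. */
    LockSharedObject(DatabaseRelationId, dboid, 0, RowExclusiveLock);
    recheck_database_exists(dboid);     /* second scan of pg_database */
    MyDatabaseId = dboid;               /* only set once re-checked */

    /* And in pgstat_update_dbstats(): */
    if (!OidIsValid(MyDatabaseId))
        return;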
Issue introduced by 5891c7a8ed, so backpatch down to 15.
Reported-by: Will Mortensen, Jacob Speidel
Analyzed-by: Will Mortensen, Jacob Speidel
Author: Andres Freund
Discussion: https://postgr.es/m/17973-bca1f7d5c14f601e@postgresql.org
Backpatch-through: 15
When a partitioned table has a primary key, trying to find the
corresponding not-null constraint for that column would come up empty,
causing code that's trying to check said not-null constraint to crash.
Fix by only running the check when the not-null constraint exists.
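Schematically (lookup helper name illustrative):

    conTup = findNotNullConstraintAttnum(RelationGetRelid(rel), attnum);
    if (HeapTupleIsValid(conTup))
    {
        /* only then inspect the not-null constraint */
    }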
Reported-by: Alexander Lakhin <exclusion@gmail.com>
Discussion: https://postgr.es/m/d57b4a69-7394-3146-5976-9a1ef27e7972@gmail.com
The comment in heapgettup_advance_block() says that it reports the
scan position before checking for end of scan, but that didn't match
the code. The code was refactored in commit 7ae0ab0ad9, which
inadvertently changed the order of the check and reporting. Change it
back.
This caused a few regression test failures with a small shared_buffers
setting like 10 MB. The 'portals' and 'cluster' tests perform seqscans
that are large enough that sync seqscans kick in. When the sync scan
position is not updated at end of scan, the next seq scan doesn't
start at the beginning of the table, and the test queries are
sensitive to that.
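A simplified sketch of the restored order:

    /* Report the scan position before checking for end of scan. */
    if (scan->rs_base.rs_flags & SO_ALLOW_SYNC)
        ss_report_location(scan->rs_base.rs_rd, block);

    if (block == scan->rs_startblock)
        return InvalidBlockNumber;      /* end of scan */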
Reviewed-by: Melanie Plageman, David Rowley
Discussion: https://www.postgresql.org/message-id/6f991389-ae22-d844-a9d8-9aceb7c01a9a@iki.fi
Backpatch-through: 16
There was some confusion in the ObjectProperty entry for transforms.
Some fields had values that were apparently meant for a different
field. Also, some fields were not assigned, which is okay for most
fields, but not for all. In particular, for .oid_catcache_id,
.name_catcache_id, and .objtype, zero is a valid value, so we need to
use -1 if not applicable. It has apparently been like that from the
very beginning (commit cac7658205). The faulty values were not
actually reachable, so it's not a big problem in practice, but we
should make it correct.
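That is, for the fields the commit calls out, spelled as a sketch:

    /* zero is a valid value for these, so "not applicable" must be -1 */
    .oid_catcache_id = -1,
    .name_catcache_id = -1,
    .objtype = -1,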
Discussion: https://www.postgresql.org/message-id/flat/75ae5875-3abc-dafc-8aec-73247ed41cde@eisentraut.org
The code already compiles without warnings under
-Wshadow=compatible-local, which is itself already enabled in the
tree; the warnings fixed here showed up with the more restrictive
-Wshadow. There are more of these that we may want to look at, and
the ones fixed here made the code confusing.
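The pattern in question, in miniature:

    int     counter = 0;        /* file scope */

    void
    example(void)
    {
        int     counter = 1;    /* shadows the global: plain -Wshadow
                                 * warns, -Wshadow=compatible-local
                                 * stays silent */
        (void) counter;
    }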
Author: Peter Smith
Discussion: https://postgr.es/m/CAHut+PuR0y4ofNOxi691VTVWmBfScHV9AaBMGSpeh8+DKp81Nw@mail.gmail.com
Unlike the other pg_stat_get_backend* functions,
pg_stat_get_backend_subxact() looks up the backend entry by using
its integer argument as a 1-based index in an internal array. The
other functions look for the entry with the matching session
backend ID. These numbers often match, but that isn't reliably
true.
This commit resolves this discrepancy by introducing
pgstat_get_local_beentry_by_backend_id() and using it in
pg_stat_get_backend_subxact(). We cannot use
pgstat_get_beentry_by_backend_id() because it returns a
PgBackendStatus, which lacks the locally computed additions
available in LocalPgBackendStatus that are required by
pg_stat_get_backend_subxact().
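A sketch of the resulting lookup:

    LocalPgBackendStatus *local_beentry;

    local_beentry = pgstat_get_local_beentry_by_backend_id(beid);
    if (local_beentry)
    {
        subxact_count = local_beentry->backend_subxact_count;
        subxact_overflowed = local_beentry->backend_subxact_overflowed;
    }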
Author: Ian Barwick
Reviewed-by: Sami Imseih, Michael Paquier, Robert Haas
Discussion: https://postgr.es/m/CAB8KJ%3Dj-ACb3H4L9a_b3ZG3iCYDW5aEu3WsPAzkm2S7JzS1Few%40mail.gmail.com
Backpatch-through: 16
Presently, pgstat_fetch_stat_beentry() accepts a session's backend
ID as its argument, and pgstat_fetch_stat_local_beentry() accepts a
1-based index in an internal array as its argument. The former is
typically used wherever a user must provide a backend ID, and the
latter is usually used internally when looping over all entries in
the array. This difference was first introduced by d7e39d72ca.
Before that commit, both functions accepted a 1-based index to the
internal array.
This commit renames these two functions to make it clear whether
they use the backend ID or the 1-based index to look up the entry.
This is preparatory work for a follow-up change that will introduce
a function for looking up a LocalPgBackendStatus using a backend
ID.
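Assuming names along these lines, the result is:

    /* before: the lookup key is not obvious from the name */
    pgstat_fetch_stat_beentry(BackendId beid);
    pgstat_fetch_stat_local_beentry(int idx);

    /* after: the names spell out the lookup key */
    pgstat_get_beentry_by_backend_id(BackendId beid);
    pgstat_get_local_beentry_by_index(int idx);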
Reviewed-by: Ian Barwick, Sami Imseih, Michael Paquier, Robert Haas
Discussion: https://postgr.es/m/CAB8KJ%3Dj-ACb3H4L9a_b3ZG3iCYDW5aEu3WsPAzkm2S7JzS1Few%40mail.gmail.com
Backpatch-through: 16