mirror of https://github.com/postgres/postgres.git synced 2025-11-22 12:22:45 +03:00
Commit Graph

8396 Commits

Michael Paquier
4800a5dfb4 Refactor InitPostgres() to use bitwise option flags
InitPostgres() has been using a set of boolean arguments to control its
behavior, and a patch under discussion was aiming at expanding it with a
third one.  In preparation for expanding this area, this commit switches
all the current boolean arguments of this routine to a single bits32
argument instead.  Two values are currently supported for the flags:
- INIT_PG_LOAD_SESSION_LIBS to load [session|local]_preload_libraries at
startup.
- INIT_PG_OVERRIDE_ALLOW_CONNS to allow connection to a database even if
it has !datallowconn.  This is used by bgworkers.
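
As a rough, self-contained illustration of the flags pattern described above (the typedef, flag values, and trimmed-down function below are assumptions made for the sketch, not the backend's actual definitions):

#include <stdint.h>
#include <stdio.h>

typedef uint32_t bits32;            /* stand-in for the backend's bits32 */

/* Flag values are illustrative; the real ones live in the backend headers. */
#define INIT_PG_LOAD_SESSION_LIBS    0x0001
#define INIT_PG_OVERRIDE_ALLOW_CONNS 0x0002

/* Instead of init_postgres(bool load_session_libs, bool override_allow_conns),
 * a single flags argument keeps the signature stable as options are added. */
static void
init_postgres_sketch(bits32 flags)
{
    if (flags & INIT_PG_LOAD_SESSION_LIBS)
        printf("would load session/local preload libraries\n");
    if (flags & INIT_PG_OVERRIDE_ALLOW_CONNS)
        printf("would allow connecting despite !datallowconn\n");
}

int
main(void)
{
    /* e.g. what a bgworker attaching to a !datallowconn database might pass */
    init_postgres_sketch(INIT_PG_OVERRIDE_ALLOW_CONNS);
    return 0;
}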

Reviewed-by: Bertrand Drouvot
Discussion: https://postgr.es/m/ZSTn66_BXRZCeaqS@paquier.xyz
2023-10-11 12:31:49 +09:00
Peter Eisentraut
1d91d24d9a Add const to values and nulls arguments
This excludes any changes that would change the external AM APIs.

Reviewed-by: Aleksander Alekseev <aleksander@timescale.com>
Discussion: https://www.postgresql.org/message-id/flat/14c31f4a-0347-0805-dce8-93a9072c05a5%40eisentraut.org
2023-10-10 07:50:43 +02:00
David Rowley
4f3b56eea2 Revert "Optimize various aggregate deserialization functions"
This reverts commit 608fd198de.

On second thought, the StringInfo API requires that strings be NUL
terminated, and pointing directly at the data in a bytea Datum does not
provide a NUL terminator.

Discussion: https://postgr.es/m/CAApHDvorfO3iBZ=xpiZvp3uHtJVLyFaPBSvcAhAq2HPLnaNSwQ@mail.gmail.com
2023-10-10 14:16:54 +13:00
David Rowley
608fd198de Optimize various aggregate deserialization functions
The serialized representation of an internal aggregate state is a bytea
value.  In each deserial function, in order to "receive" the bytea value
we appended it onto a short-lived StringInfoData using
appendBinaryStringInfo.  This was a little wasteful, as it meant having to
palloc memory, copy a (possibly long) series of bytes, and then later pfree
that memory.  Instead of going to this extra trouble, we can just fake up
a StringInfoData and point the data directly at the bytea's payload.  This
should help increase the performance of internal aggregate
deserialization.
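
A hedged sketch of the trick, roughly as it could look inside a deserial
function (the field assignments are illustrative; note the NUL-termination
caveat that led to the revert in 4f3b56eea2 above):

bytea      *sstate = PG_GETARG_BYTEA_PP(0);
StringInfoData buf;

/* Sketch: point the StringInfo directly at the bytea payload instead of
 * copying it into a palloc'd buffer with appendBinaryStringInfo(). */
buf.data = VARDATA_ANY(sstate);
buf.len = VARSIZE_ANY_EXHDR(sstate);
buf.maxlen = buf.len;
buf.cursor = 0;

/* ... read fields with pq_getmsgint64(&buf), pq_getmsgfloat8(&buf), ... */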

Reviewed-by: Michael Paquier
Discussion: https://postgr.es/m/CAApHDvr=e-YOigriSHHm324a40HPqcUhSp6pWWgjz5WwegR=cQ@mail.gmail.com
2023-10-09 17:25:16 +13:00
Alvaro Herrera
1c99cde2f3 Improve JsonLexContext's freeability
Previously, the JSON code didn't have to worry too much about freeing
JsonLexContext, because it was never too long-lived.  With new features
being added for SQL/JSON this is no longer the case.  Add a routine
that knows how to free this struct and apply that to a few places, to
prevent this from becoming problematic.

At the same time, we change the API of makeJsonLexContextCstringLen to
make it receive a pointer to JsonLexContext for callers that want it to
be stack-allocated; it can also be passed as NULL to get the original
behavior of a palloc'ed one.
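
A hedged sketch of the two calling styles this allows; the exact parameter
list is assumed here from the description above and may differ from
src/include/common/jsonapi.h (json is an arbitrary C string):

/* Stack-allocated: pass a pointer to a local struct. */
JsonLexContext lex;

makeJsonLexContextCstringLen(&lex, json, strlen(json),
                             GetDatabaseEncoding(), true);
/* ... parse ... */
freeJsonLexContext(&lex);       /* presumably frees only internal state */

/* Heap-allocated (original behavior): pass NULL, get a palloc'd context. */
JsonLexContext *plex = makeJsonLexContextCstringLen(NULL, json, strlen(json),
                                                    GetDatabaseEncoding(), true);
/* ... parse ... */
freeJsonLexContext(plex);       /* presumably also frees the struct itself */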

This also causes an ABI break due to the addition of flags to
JsonLexContext, so we can't easily backpatch it.  AFAICS that's not much
of a problem; apparently some leaks might exist in JSON usage of
text-search, for example via json_to_tsvector, but I haven't seen any
complaints about that.

Per Coverity complaint about datum_to_jsonb_internal().

Discussion: https://postgr.es/m/20230808174110.oq3iymllsv6amkih@alvherre.pgsql
2023-10-05 10:59:08 +02:00
Peter Eisentraut
5e4282772a Remove RelationGetIndexRawAttOptions()
There was only one caller left, for which this function was overkill.

Also, having it in relcache.c was inappropriate, since it doesn't work
with the relcache at all.

Discussion: https://www.postgresql.org/message-id/flat/f84640e3-00d3-5abd-3f41-e6a19d33c40b@eisentraut.org
2023-10-03 17:51:02 +02:00
Michael Paquier
a956bd3fa9 Avoid memory size overflow when allocating backend activity buffer
The code in charge of copying the contents of PgBackendStatus to local
memory could fail on memory allocation because of an overflow on the
amount of memory to use.  The overflow can happen when a high
track_activity_query_size value (capped at 1MB) is combined with a large
max_connections, such that their product exceeds INT32_MAX, as both
parameters are treated as signed integers.  This can for example be triggered
with the following functions, all calling pgstat_read_current_status():
- pg_stat_get_backend_subxact()
- pg_stat_get_backend_idset()
- pg_stat_get_progress_info()
- pg_stat_get_activity()
- pg_stat_get_db_numbackends()

The change to use MemoryContextAllocHuge() has been introduced in
8d0ddccec6, so backpatch down to 12.
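
A minimal sketch of the overflow and of the kind of fix described, with
illustrative variable names:

int     track_activity_query_size = 1024 * 1024;    /* GUC maximum, 1MB */
int     max_connections = 10000;

/* Broken: the product is computed in 32-bit signed arithmetic and can
 * exceed INT32_MAX before the allocator ever sees it. */
char   *bad = palloc(track_activity_query_size * max_connections);

/* Fixed: widen to Size before multiplying and use the "huge" allocator,
 * which accepts requests larger than the regular palloc() limit. */
Size    needed = (Size) track_activity_query_size * max_connections;
char   *buf = MemoryContextAllocHuge(TopMemoryContext, needed);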

Author: Jakub Wartak
Discussion: https://postgr.es/m/CAKZiRmw8QSNVw2qNK-dznsatQqz+9DkCquxP0GHbbv1jMkGHMA@mail.gmail.com
Backpatch-through: 12
2023-10-03 15:37:00 +09:00
David Rowley
2075ba9dc9 Tidy-up some appendStringInfo*() usages
Convert a few newish appendStringInfo() calls that need no format
conversion to use appendStringInfoString() instead.  Also, adjust usages
of appendStringInfoString() that append only a string containing a single
character to use appendStringInfoChar() instead.

This makes the code marginally faster, but primarily the change ensures
that the StringInfo routines are used as they were intended to be used.
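
For illustration (not code from the patch; relname and natts are made-up
variables), the rule of thumb looks like this:

StringInfoData buf;

initStringInfo(&buf);

/* Format conversion needed: appendStringInfo() is the right tool. */
appendStringInfo(&buf, "relation \"%s\" has %d attributes", relname, natts);

/* Plain string with no conversion specifiers: appendStringInfoString()
 * skips the format-parsing overhead. */
appendStringInfoString(&buf, " -- no formatting needed here");

/* Single character: appendStringInfoChar() is cheaper still. */
appendStringInfoChar(&buf, '\n');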

Discussion: https://postgr.es/m/CAApHDvpXKQmL+r=VDNS98upqhr9yGBhv2Jw3GBFFk_wKHcB39A@mail.gmail.com
2023-10-03 17:09:52 +13:00
Tom Lane
5b7b382464 Fix datalen calculation in tsvectorrecv().
After receiving position data for a lexeme, tsvectorrecv()
advanced its "datalen" value by (npos+1)*sizeof(WordEntry)
where the correct calculation is (npos+1)*sizeof(WordEntryPos).
This accidentally failed to render the constructed tsvector
invalid, but it did result in leaving some wasted space
approximately equal to the space consumed by the position data.
That could have several bad effects:

* Disk space is wasted if the received tsvector is stored into a
  table as-is.

* A legal tsvector could get rejected with "maximum total lexeme
  length exceeded" if the extra space pushes it over the MAXSTRPOS
  limit.

* In edge cases, the finished tsvector could be assigned a length
  larger than the allocated size of its palloc chunk, conceivably
  leading to SIGSEGV when the tsvector gets copied somewhere else.
  The odds of a field failure of this sort seem low, though valgrind
  testing could probably have found this.

While we're here, let's express the calculation as
"sizeof(uint16) + npos * sizeof(WordEntryPos)" to avoid the type
pun implicit in the "npos + 1" formulation.  It's not wrong
given that WordEntryPos had better be 2 bytes to avoid padding
problems, but it seems clearer this way.
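
In code form, the corrected accounting is a sketch like:

/* Correct: a uint16 position count followed by npos 2-byte positions. */
datalen += sizeof(uint16) + npos * sizeof(WordEntryPos);

/* Buggy form, counting in WordEntry-sized (larger) units:
 *     datalen += (npos + 1) * sizeof(WordEntry);
 * which over-counts, leaving wasted space roughly equal to the size of
 * the position data itself. */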

Report and patch by Denis Erokhin.  Back-patch to all supported
versions.

Discussion: https://postgr.es/m/009801d9f2d9$f29730c0$d7c59240$@datagile.ru
2023-10-01 13:16:47 -04:00
Peter Eisentraut
64b787656d Add some const qualifiers
There was a mismatch between the const qualifiers for
excludeDirContents in src/backend/backup/basebackup.c and
src/bin/pg_rewind/filemap.c, which led to a quick search for similar
cases.  We should make excludeDirContents match, but the rest of the
changes seem like a good idea as well.

Author: David Steele <david@pgmasters.net>
Discussion: https://www.postgresql.org/message-id/flat/669a035c-d23d-2f38-7ff0-0cb93e01d610@pgmasters.net
2023-09-26 11:28:57 +01:00
Michael Paquier
e221c0befb Fix behavior of "force" in pgstat_report_wal()
As implemented in 5891c7a8ed, setting "force" to true in
pgstat_report_wal() causes the routine not to wait for the pgstat
shmem lock if it cannot be acquired, in which case the WAL and I/O
statistics end up not being flushed.  The origin of the confusion
comes from pgstat_flush_wal() and pgstat_flush_io(), which take "nowait"
as their sole argument.  The I/O stats are new in v16.

This is the opposite of the behavior used in pgstat_report_stat(), where
"force" is the opposite of "nowait".  In that routine, when "force" is
true, "nowait" is set to false, which causes the routine to wait for the
pgstat shmem lock, ensuring that the stats are always flushed.  When
"force" is false, "nowait" is set to true, and the stats are only flushed
if the pgstat shmem lock can be acquired, the routine returning
immediately without flushing them if the lock cannot be acquired.

This commit changes pgstat_report_wal() so that "force" has the same
behavior as in pgstat_report_stat().  There are currently three callers
of pgstat_report_wal():
- Two in the checkpointer, where force=true during a shutdown and in the
main checkpointer loop.  The code now behaves so that the stats are
always flushed.
- One in the main loop of the bgwriter, where force=false.  The code now
behaves so that the stats are not flushed if the pgstat shmem lock
cannot be acquired.

Before this commit, some stats on WAL and I/O could have been lost after
a shutdown, for example.
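
A hedged sketch of the convention now shared with pgstat_report_stat();
the real routine does more, but the force-to-nowait mapping is the point:

void
pgstat_report_wal(bool force)
{
    bool    nowait = !force;    /* force==true: wait for the shmem lock */

    /* Flush WAL and I/O stats; the flush is skipped only when nowait is
     * set and the pgstat shmem lock cannot be taken immediately. */
    pgstat_flush_wal(nowait);
    pgstat_flush_io(nowait);
}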

Reported-by: Ryoga Yoshida
Author: Ryoga Yoshida, Michael Paquier
Discussion: https://postgr.es/m/f87a4d7be70530606b864fd1df91718c@oss.nttdata.com
Backpatch-through: 15
2023-09-26 09:29:47 +09:00
Tom Lane
3aff1d3fd0 Doc: improve cross-reference in Makefile comment.
Per gripe from Japin Li.

Discussion: https://postgr.es/m/MEYP282MB16692171F13B5DF40DB768EEB6FCA@MEYP282MB1669.AUSP282.PROD.OUTLOOK.COM
2023-09-25 11:25:19 -04:00
Daniel Gustafsson
c1609cf3c0 Fix typo in numutils.c comments
s/messges/messages/
2023-09-25 13:29:34 +02:00
Daniel Gustafsson
7750fefdb2 Add GUC for temporarily disabling event triggers
In order to troubleshoot misbehaving or buggy event triggers, the
documented advice is to enter single-user mode.  In an attempt to
reduce the number of situations where single-user mode is required
(or even recommended) for non-extraordinary maintenance, this GUC
allows event triggers to be temporarily suspended.

This was originally extracted from a larger patchset which aimed
at supporting event triggers on login events.

Reviewed-by: Ted Yu <yuzhihong@gmail.com>
Reviewed-by: Mikhail Gribkov <youzhick@gmail.com>
Reviewed-by: Justin Pryzby <pryzby@telsasoft.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Discussion: https://postgr.es/m/9140106E-F9BF-4D85-8FC8-F2D3C094A6D9@yesql.se
Discussion: https://postgr.es/m/0d46d29f-4558-3af9-9c85-7774e14a7709@postgrespro.ru
2023-09-25 12:41:49 +02:00
Peter Eisentraut
c5b0582841 Replace more MemSet calls with struct initialization
This fixes up 10ea0f924a to use the style introduced by 9fd45870c1.
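
The style change itself, shown on a made-up struct:

typedef struct ExampleState
{
    int     count;
    bool    done;
    char   *name;
} ExampleState;

/* Old style: zero the memory explicitly after declaring. */
ExampleState a;
MemSet(&a, 0, sizeof(a));

/* New style: zero-initialize in the declaration itself. */
ExampleState b = {0};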

Author: Richard Guo <guofenglinux@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/CAMbWs490gJf5A=ydqyjh+Z8mVQa_foTGtcmBtHGLra0aOwLWHQ@mail.gmail.com
2023-09-19 11:35:01 +02:00
Michael Paquier
78a33bba4c Improve error message for snapshot import in snapmgr.c, take two
When a snapshot file failed to be read in ImportSnapshot(), an ERROR of
"invalid snapshot identifier" was issued when opening a stream for it in
read-only mode.  The error handling is improved to be more informative
in failure cases:
- If a snapshot identifier uses incorrect characters, complain with the
same error as before this commit.
- If the snapshot file cannot be found in pg_snapshots/, complain with
"snapshot \"foo\" does not exist" instead.  This maps to the case where
AllocateFile() fails on ENOENT.  Based on a suggestion from Andres
Freund.
- If AllocateFile() fails with an errno other than ENOENT, report it
with more details via %m instead, as these failures are never
expected.

b29504eeb489 was the first improvement take.  The older error message
exists since bb446b689b that introduced snapshot imports.  Two test
cases are added to cover the cases of an identifier with an incorrect
format and of a missing snapshot.

Author: Bharath Rupireddy
Reviewed-by: Andres Freund, Daniel Gustafsson, Michael Paquier
Discussion: https://postgr.es/m/CALj2ACWmr=3KdxDkm8h7Zn1XxBoF6hdzq8WQyMn2y1OL5RYFrg@mail.gmail.com
2023-09-19 10:19:50 +09:00
Tom Lane
f73fa5a470 Don't crash if cursor_to_xmlschema is used on a non-data-returning Portal.
cursor_to_xmlschema() assumed that any Portal must have a tupDesc,
which is not so.  Add a defensive check.
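
A hedged sketch of what such a defensive check can look like; the error
codes, wording, and the cursor_name variable are illustrative rather than
quoted from the patch:

Portal      portal = SPI_cursor_find(cursor_name);

if (portal == NULL)
    ereport(ERROR,
            (errcode(ERRCODE_UNDEFINED_CURSOR),
             errmsg("cursor \"%s\" does not exist", cursor_name)));

/* A Portal found this way is not guaranteed to return tuples at all. */
if (portal->tupDesc == NULL)
    ereport(ERROR,
            (errcode(ERRCODE_INVALID_CURSOR_STATE),
             errmsg("portal \"%s\" does not return tuples", cursor_name)));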

It's plausible that this mistake occurred because of the rather
poorly chosen name of the lookup function SPI_cursor_find(),
which in such cases is returning something that isn't very much
like a cursor.  Add some documentation to try to forestall future
errors of the same ilk.

Report and patch by Boyu Yang (docs changes by me).  Back-patch
to all supported branches.

Discussion: https://postgr.es/m/dd343010-c637-434c-a8cb-418f53bda3b8.yangboyu.yby@alibaba-inc.com
2023-09-18 14:28:17 -04:00
Tom Lane
e0e492e5a9 Track nesting depth correctly when drilling down into RECORD Vars.
expandRecordVariable() failed to adjust the parse nesting structure
correctly when recursing to inspect an outer-level Var.  This could
result in assertion failures or core dumps in corner cases.

Likewise, get_name_for_var_field() failed to adjust the deparse
namespace stack correctly when recursing to inspect an outer-level
Var.  In this case the likely result was a "bogus varno" error
while deparsing a view.

Per bug #18077 from Jingzhou Fu.  Back-patch to all supported
branches.

Richard Guo, with some adjustments by me

Discussion: https://postgr.es/m/18077-b9db97c6e0ab45d8@postgresql.org
2023-09-15 17:01:52 -04:00
Michael Paquier
be022908cf Revert "Improve error message on snapshot import in snapmgr.c"
This reverts commit a0d87bcd9b, following a remark from Andres Freund
that the new error can be triggered with an incorrect SET TRANSACTION
SNAPSHOT command without being really helpful for the user, as it uses
the internal file name.

Discussion: https://postgr.es/m/20230914020724.hlks7vunitvtbbz4@awork3.anarazel.de
Backpatch-through: 11
2023-09-14 16:00:01 +09:00
Michael Paquier
a0d87bcd9b Improve error message on snapshot import in snapmgr.c
When a snapshot file failed to be read in ImportSnapshot(), an ERROR of
"invalid snapshot identifier" was issued when opening a stream for it
in read-only mode.  This error message is reworded to be the same
as all the other messages used on failure in this case, which is useful
when debugging this area.

Thinko introduced by bb446b689b where snapshot imports have been
added.  A backpatch down to 11 is done as this can improve any work
related to snapshot imports in older branches.

Author: Bharath Rupireddy
Reviewed-by: Daniel Gustafsson
Discussion: https://postgr.es/m/CALj2ACWmr=3KdxDkm8h7Zn1XxBoF6hdzq8WQyMn2y1OL5RYFrg@mail.gmail.com
Backpatch-through: 11
2023-09-14 10:30:08 +09:00
Michael Paquier
b8f44a4779 Refactor error messages for unsupported providers in pg_locale.c
These code paths should not normally be reached, but if they are, an
error showing "(null)" as the collation provider would be raised when no
locale is set, even though we can assume that libc is being referred to.

The code is refactored so that the provider is always reported even if no
locale is set.  While on it, the name of the function where the error
happens is added, as it can be helpful for debugging.

Issue introduced by d87d548cd0, so backpatch down to 16.

Author: Michael Paquier, Ranier Vilela
Reviewed-by: Jeff Davis, Kyotaro Horiguchi
Discussion: https://postgr.es/m/7073610042fcf97e1bea2ce08b7e0214b5e11094.camel@j-davis.com
Backpatch-through: 16
2023-09-14 08:35:02 +09:00
Thomas Munro
3acd0599bd Fix exception safety bug in typcache.c.
If an out-of-memory error was thrown at an unfortunate time,
ensure_record_cache_typmod_slot_exists() could leak memory and leave
behind a global state that produced an infinite loop on the next call.

Fix by merging RecordCacheArray and RecordIdentifierArray into a single
array.  With only one allocation or re-allocation, there is no
intermediate state.

Back-patch to all supported releases.

Reported-by: "James Pang (chaolpan)" <chaolpan@cisco.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/PH0PR11MB519113E738814BDDA702EDADD6EFA%40PH0PR11MB5191.namprd11.prod.outlook.com
2023-09-13 14:58:22 +12:00
Nathan Bossart
3ed1956719 Make enum for sync methods available to frontend code.
This commit renames RecoveryInitSyncMethod to DataDirSyncMethod and
moves it to common/file_utils.h.  This is preparatory work for a
follow-up commit that will allow specifying the synchronization
method in frontend utilities such as pg_upgrade and pg_basebackup.

Reviewed-by: Michael Paquier
Discussion: https://postgr.es/m/ZN2ZB4afQ2JbR9TA%40paquier.xyz
2023-09-06 16:26:39 -07:00
Michael Paquier
59cbf60c0f Remove column for wait event names in wait_event_names.txt
This file is now made of two columns, removing the column listing the
user-visible strings used in the system views and the documentation:
- Enum definitions for each class without the prefix "WAIT_EVENT_", so
that this information can be grepped in the code and wait_event_names.txt
at the same time.
- Description in the documentation.

The wait event names are now generated from the enum objects in
CamelCase, with the underscores removed.  The data generated for wait
events is consistent with what was produced by 414f6c0fb7.

This has the advantage of removing WAIT_EVENT_DOCONLY, which was a
placeholder for the wait event types Lock and LWLock, as these two only
require documentation to be generated.

Reviewed-by: Bertrand Drouvot
Discussion: https://postgr.es/m/ZOxVHQwEC/9X/p/z@paquier.xyz
2023-09-06 10:27:02 +09:00
Michael Paquier
414f6c0fb7 Use more consistent names for wait event objects and types
The new event names use the same characters as before, differing only in
case, so applying lower() or upper() in monitoring queries allows the
detection of the same events as before this change.  It is possible to
cross-check the
data with the system view pg_wait_events, for instance, with a query
like that showing no differences:
SELECT lower(type), lower(name), description
  FROM pg_wait_events ORDER BY 1, 2;

This will help in introducing more simplifications in the format
of wait_event_names.txt.  Some of the enum values in the code had to be
renamed a bit to follow the same naming convention across the board.

Reviewed-by: Bertrand Drouvot
Discussion: https://postgr.es/m/ZOxVHQwEC/9X/p/z@paquier.xyz
2023-09-06 10:04:43 +09:00
Thomas Munro
f691f5b80a Remove the "snapshot too old" feature.
Remove the old_snapshot_threshold setting and mechanism for producing
the error "snapshot too old", originally added by commit 848ef42b.
Unfortunately it had a number of known problems in terms of correctness
and performance, mostly reported by Andres in the course of his work on
snapshot scalability.  We agreed to remove it, after a long period
without an active plan to fix it.

This is certainly a desirable feature, and someone might propose a new
or improved implementation in the future.

Reported-by: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/CACG%3DezYV%2BEvO135fLRdVn-ZusfVsTY6cH1OZqWtezuEYH6ciQA%40mail.gmail.com
Discussion: https://postgr.es/m/20200401064008.qob7bfnnbu4w5cw4%40alap3.anarazel.de
Discussion: https://postgr.es/m/CA%2BTgmoY%3Daqf0zjTD%2B3dUWYkgMiNDegDLFjo%2B6ze%3DWtpik%2B3XqA%40mail.gmail.com
2023-09-05 19:53:43 +12:00
Michael Paquier
aa0d350456 Improve description of keys in tsvector
If all the bits of a key in a tsvector are true (marked with ALLISTRUE),
gtsvectorout() would show the following description:
"0 true bits, 0 false bits"

This is confusing: all the bits are true, yet the output is the same as
what would be shown if siglen were 0.

This commit improves the output so that "all true bits" is shown instead
in this case.  Alexander has proposed a regression test for pageinspect,
not included here as it is rather expensive compared to its coverage
value.

Author: Alexander Lakhin
Discussion: https://postgr.es/m/17950-6c80a8d2b94ec695@postgresql.org
2023-09-05 13:57:44 +09:00
Michael Paquier
ae10dbb0c5 Fix out-of-bound read in gtsvector_picksplit()
This could lead to an imprecise choice when splitting an index page of a
GiST index on a tsvector, deciding which entries should remain on the
old page and which entries should move to a new page.

This has been wrong since tsearch2 was moved into core with commit
140d4ebcb4, so backpatch all the way down.  This error was
spotted by valgrind.

Author: Alexander Lakhin
Discussion: https://postgr.es/m/17950-6c80a8d2b94ec695@postgresql.org
Backpatch-through: 11
2023-09-04 14:55:37 +09:00
Michael Paquier
2b8e5273e9 Fix handling of shared statistics with dropped databases
Dropping a database while a connection was being attempted on it could
lead to the presence of valid database entries in shared statistics.
The issue is that MyDatabaseId was getting set earlier than it should
have been: if the connection attempted on the dropped database failed
because the database had been renamed or dropped, the shutdown callback
of the shared statistics would end up re-inserting a valid-looking entry
for the database already dropped.

As analyzed by the bug reporters, this issue could lead to phantom
entries in the database list maintained by the autovacuum launcher
(in rebuild_database_list()) if the dropped database was part of the
database list while it was still valid.  After the database was dropped,
it would remain the highest-priority entry on the list of databases to
be considered by the autovacuum worker.  This would prevent autovacuum
jobs from happening on all the other databases still present.

The commit fixes this issue by delaying setting MyDatabaseId until the
database's existence has been re-checked with the second scan on
pg_database after getting a shared lock on it, and by switching
pgstat_update_dbstats() so that nothing happens if MyDatabaseId is not
valid.

Issue introduced by 5891c7a8ed, so backpatch down to 15.

Reported-by: Will Mortensen, Jacob Speidel
Analyzed-by: Will Mortensen, Jacob Speidel
Author: Andres Freund
Discussion: https://postgr.es/m/17973-bca1f7d5c14f601e@postgresql.org
Backpatch-through: 15
2023-09-04 08:04:22 +09:00
Nathan Bossart
d0fe3046ee Use actual backend IDs in pg_stat_get_backend_subxact().
Unlike the other pg_stat_get_backend* functions,
pg_stat_get_backend_subxact() looks up the backend entry by using
its integer argument as a 1-based index in an internal array.  The
other functions look for the entry with the matching session
backend ID.  These numbers often match, but that isn't reliably
true.

This commit resolves this discrepancy by introducing
pgstat_get_local_beentry_by_backend_id() and using it in
pg_stat_get_backend_subxact().  We cannot use
pgstat_get_beentry_by_backend_id() because it returns a
PgBackendStatus, which lacks the locally computed additions
available in LocalPgBackendStatus that are required by
pg_stat_get_backend_subxact().

Author: Ian Barwick
Reviewed-by: Sami Imseih, Michael Paquier, Robert Haas
Discussion: https://postgr.es/m/CAB8KJ%3Dj-ACb3H4L9a_b3ZG3iCYDW5aEu3WsPAzkm2S7JzS1Few%40mail.gmail.com
Backpatch-through: 16
2023-08-30 14:47:01 -07:00
Nathan Bossart
3d51cb5197 Rename some support functions for pgstat* views.
Presently, pgstat_fetch_stat_beentry() accepts a session's backend
ID as its argument, and pgstat_fetch_stat_local_beentry() accepts a
1-based index in an internal array as its argument.  The former is
typically used wherever a user must provide a backend ID, and the
latter is usually used internally when looping over all entries in
the array.  This difference was first introduced by d7e39d72ca.
Before that commit, both functions accepted a 1-based index to the
internal array.

This commit renames these two functions to make it clear whether
they use the backend ID or the 1-based index to look up the entry.
This is preparatory work for a follow-up change that will introduce
a function for looking up a LocalPgBackendStatus using a backend
ID.

Reviewed-by: Ian Barwick, Sami Imseih, Michael Paquier, Robert Haas
Discussion: https://postgr.es/m/CAB8KJ%3Dj-ACb3H4L9a_b3ZG3iCYDW5aEu3WsPAzkm2S7JzS1Few%40mail.gmail.com
Backpatch-through: 16
2023-08-30 14:46:52 -07:00
Michael Paquier
52c6c0f196 Avoid possible overflow with ltsGetFreeBlock() in logtape.c
nFreeBlocks, defined as a long, stores the number of free blocks in a
logical tape.  ltsGetFreeBlock() has been using an int to store the
value of nFreeBlocks, which could lead to overflows on platforms where
long and int are not the same size (in short everything except Windows
where long is 4 bytes).

The problematic intermediate variable is switched to be a long instead
of an int.
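
The shape of the fix, sketched with an illustrative variable name:

/* Before: truncates on platforms where long is wider than int. */
int     heapsize = lts->nFreeBlocks;

/* After: keep the intermediate value in a long, matching nFreeBlocks. */
long    heapsize = lts->nFreeBlocks;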

Issue introduced by c02fdc9223, so backpatch down to 13.

Author: Ranier vilela
Reviewed-by: Peter Geoghegan, David Rowley
Discussion: https://postgr.es/m/CAEudQApLDWCBR_xmwNjGBrDo+f+S4E87x3s7-+hoaKqYdtC4JQ@mail.gmail.com
Backpatch-through: 13
2023-08-30 08:03:42 +09:00
Peter Eisentraut
63956bed7b Rename logical_replication_mode to debug_logical_replication_streaming
The logical_replication_mode GUC is intended for testing and debugging
purposes, but its current name may be misleading and encourage users to make
unnecessary changes.

To avoid confusion, rename the GUC to the less misleading name
debug_logical_replication_streaming, which casual users are less likely
to mistakenly assume needs to be modified in a regular logical
replication setup.

Author: Hou Zhijie <houzj.fnst@cn.fujitsu.com>
Reviewed-by: Peter Smith <smithpb2250@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/d672d774-c44b-6fec-f993-793e744f169a%40eisentraut.org
2023-08-29 15:19:56 +02:00
Peter Eisentraut
36e4419d1f Make error messages about WAL segment size more consistent
Make the primary messages more compact and make the detail messages
uniform.  In initdb.c and pg_resetwal.c, use the newish
option_parse_int() to simplify some of the option parsing.  For the
backend GUC wal_segment_size, add a GUC check hook to do the
verification instead of coding it in bootstrap.c.  This might be
overkill, but that way the check is in the right place and it becomes
more self-documenting.
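
A hedged sketch of what such a GUC check hook can look like; the hook
name and message wording are assumptions, while IsValidWalSegSize() is
the existing validity macro:

static bool
check_wal_segment_size(int *newval, void **extra, GucSource source)
{
    if (!IsValidWalSegSize(*newval))
    {
        GUC_check_errdetail("The WAL segment size must be a power of two between 1 MB and 1 GB.");
        return false;
    }
    return true;
}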

In passing, make pg_controldata use the logging API for warning
messages.

Reviewed-by: Aleksander Alekseev <aleksander@timescale.com>
Discussion: https://www.postgresql.org/message-id/flat/9939aa8a-d7be-da2c-7715-0a0b5535a1f7@eisentraut.org
2023-08-28 15:17:04 +02:00
Michael Paquier
bb9002257b Fix some typos in wait_event_names.txt
Noticed in passing, while hacking on a different patch touching this
area.
2023-08-28 17:09:12 +09:00
Michael Paquier
617f9b7d4b Tighten unit parsing in interval values
Interval values now generate an error when the user specifies multiple
consecutive units or a unit without a value.  Previously, it was
possible to specify multiple units consecutively, which is contrary to
what the documentation allows, so it was possible to end up with
confusing interval values.

This is a follow-up of the work done in 165d581f14.

Author: Joseph Koshakow
Reviewed-by: Jacob Champion, Gurjeet Singh, Reid Thompson
Discussion: https://postgr.es/m/CAAvxfHd-yNO+XYnUxL=GaNZ1n+eE0V-oE0+-cC1jdjdU0KS3iw@mail.gmail.com
2023-08-28 14:27:17 +09:00
Michael Paquier
165d581f14 Tighten handling of "ago" in interval values
This commit restricts the unit "ago" to appearing only at the end of the
interval.  According to the documentation, a direction can only be
defined at the end of an interval, but it was possible to define it in
the middle of the string or to define it multiple times.

In spirit, this is similar to the error handling improvements done in
5b3c595355 or bcc704b524.

Author: Joseph Koshakow
Reviewed-by: Jacob Champion, Gurjeet Singh, Reid Thompson
Discussion: https://postgr.es/m/CAAvxfHd-yNO+XYnUxL=GaNZ1n+eE0V-oE0+-cC1jdjdU0KS3iw@mail.gmail.com
2023-08-28 13:49:55 +09:00
Michael Paquier
d6d1430f40 Remove dead code in DecodeInterval()
This commit removes some dead code related to the unit type RESERV,
whose last use has been removed from the unit lookup table used for
intervals ("deltatktbl" in datetime.c) in 666cbae16d.  Before that,
RESERV was used as an equivalent of "invalid", but that's now
unreachable.

Author: Joseph Koshakow
Reviewed-by: Jacob Champion, Gurjeet Singh, Reid Thompson
Discussion: https://postgr.es/m/CAAvxfHd-yNO+XYnUxL=GaNZ1n+eE0V-oE0+-cC1jdjdU0KS3iw@mail.gmail.com
2023-08-28 12:53:41 +09:00
Alvaro Herrera
b0e96f3119 Catalog not-null constraints
We now create contype='n' pg_constraint rows for not-null constraints.

We propagate these constraints to other tables during operations such as
adding inheritance relationships, creating and attaching partitions and
creating tables LIKE other tables.  We also spawn not-null constraints
for inheritance child tables when their parents have primary keys.
These related constraints mostly follow the well-known rules of
conislocal and coninhcount that we have for CHECK constraints, with some
adaptations: for example, as opposed to CHECK constraints, we don't
match not-null ones by name when descending a hierarchy to alter it,
instead matching by column name that they apply to.  This means we don't
require the constraint names to be identical across a hierarchy.

For now, we omit them for system catalogs.  Maybe this is worth
reconsidering.  We don't support NOT VALID nor DEFERRABLE clauses
either; these can be added as separate features later (this patch is
already large and complicated enough.)

psql shows these constraints in \d+.

pg_dump requires some ad-hoc hacks, particularly when dumping a primary
key.  We now create one "throwaway" not-null constraint for each column
in the PK together with the CREATE TABLE command, and once the PK is
created, all those throwaway constraints are removed.  This avoids
having to check each tuple for nullness when the dump restores the
primary key creation.

pg_upgrading from an older release requires a somewhat brittle procedure
to create a constraint state that matches what would be created if the
database were being created fresh in Postgres 17.  I have tested all the
scenarios I could think of, and it works correctly as far as I can tell,
but I could have neglected weird cases.

This patch has been very long in the making.  The first patch was
written by Bernd Helmle in 2010 to add a new pg_constraint.contype value
('n'), which I (Álvaro) then hijacked in 2011 and 2012, until that one
was killed by the realization that we ought to use contype='c' instead:
manufactured CHECK constraints.  However, later SQL standard
development, as well as nonobvious emergent properties of that design
(mostly, failure to distinguish them from "normal" CHECK constraints as
well as the performance implication of having to test the CHECK
expression) led us to reconsider this choice, so now the current
implementation uses contype='n' again.  During Postgres 16 this had
already been introduced by commit e056c557ae, but there were some
problems mainly with the pg_upgrade procedure that couldn't be fixed in
reasonable time, so it was reverted.

In 2016 Vitaly Burovoy also worked on this feature[1] but found no
consensus for his proposed approach, which was claimed to be closer to
the letter of the standard, requiring an additional pg_attribute column
to track the OID of the not-null constraint for that column.
[1] https://postgr.es/m/CAKOSWNkN6HSyatuys8xZxzRCR-KL1OkHS5-b9qd9bf1Rad3PLA@mail.gmail.com

Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
Author: Bernd Helmle <mailings@oopsware.de>
Reviewed-by: Justin Pryzby <pryzby@telsasoft.com>
Reviewed-by: Peter Eisentraut <peter.eisentraut@enterprisedb.com>
Reviewed-by: Dean Rasheed <dean.a.rasheed@gmail.com>
2023-08-25 13:31:24 +02:00
Tom Lane
d8b2fcc9d4 Avoid unnecessary plancache revalidation of utility statements.
Revalidation of a plancache entry (after a cache invalidation event)
requires acquiring a snapshot.  Normally that is harmless, but not
if the cached statement is one that needs to run without acquiring a
snapshot.  We were already aware of that for TransactionStmts,
but for some reason hadn't extrapolated to the other statements that
PlannedStmtRequiresSnapshot() knows mustn't set a snapshot.  This can
lead to unexpected failures of commands such as SET TRANSACTION
ISOLATION LEVEL.  We can fix it in the same way, by excluding those
command types from revalidation.

However, we can do even better than that: there is no need to
revalidate for any statement type for which parse analysis, rewrite,
and plan steps do nothing interesting, which is nearly all utility
commands.  To mechanize this, invent a parser function
stmt_requires_parse_analysis() that tells whether parse analysis does
anything beyond wrapping a CMD_UTILITY Query around the raw parse
tree.  If that's what it does, then rewrite and plan will just
skip the Query, so that it is not possible for the same raw parse
tree to produce a different plan tree after cache invalidation.

stmt_requires_parse_analysis() is basically equivalent to the
existing function analyze_requires_snapshot(), except that for
obscure reasons that function omits ReturnStmt and CallStmt.
It is unclear whether those were oversights or intentional.
I have not been able to demonstrate a bug from not acquiring a
snapshot while analyzing these commands, but at best it seems mighty
fragile.  It seems safer to acquire a snapshot for parse analysis of
these commands too, which allows making stmt_requires_parse_analysis
and analyze_requires_snapshot equivalent.

In passing this fixes a second bug, which is that ResetPlanCache
would exclude ReturnStmts and CallStmts from revalidation.
That's surely *not* safe, since they contain parsable expressions.

Per bug #18059 from Pavel Kulakov.  Back-patch to all supported
branches.

Discussion: https://postgr.es/m/18059-79c692f036b25346@postgresql.org
2023-08-24 12:02:46 -04:00
Peter Eisentraut
4f3514f201 Rename hook functions for debug_io_direct to match variable name.
Commit 319bae9a renamed the GUC.  Rename the check and assign functions
to match, and alphabetize.

Back-patch to 16.

Author: Peter Eisentraut <peter@eisentraut.org>
Discussion: https://postgr.es/m/2769341e-fa28-c2ee-3e4b-53fdcaaf2271%40eisentraut.org
2023-08-24 22:25:49 +12:00
Nathan Bossart
260a1f18da Add to_bin() and to_oct().
This commit introduces functions for converting numbers to their
equivalent binary and octal representations.  Also, the base
conversion code for these functions and to_hex() has been moved to
a common helper function.
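
A standalone sketch of the shared base-conversion idea (this is not the
backend helper, just the factoring it describes):

#include <stdio.h>

static void
to_base(unsigned long long value, int base, char *out, size_t outlen)
{
    static const char digits[] = "0123456789abcdef";
    char    buf[sizeof(value) * 8 + 1];
    char   *p = buf + sizeof(buf);

    *--p = '\0';
    do
    {
        *--p = digits[value % base];
        value /= base;
    } while (value != 0);

    snprintf(out, outlen, "%s", p);
}

int
main(void)
{
    char    s[80];

    to_base(44, 2, s, sizeof(s));   /* "101100", like to_bin(44) */
    puts(s);
    to_base(44, 8, s, sizeof(s));   /* "54", like to_oct(44) */
    puts(s);
    to_base(44, 16, s, sizeof(s));  /* "2c", like to_hex(44) */
    puts(s);
    return 0;
}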

Co-authored-by: Eric Radman
Reviewed-by: Ian Barwick, Dag Lem, Vignesh C, Tom Lane, Peter Eisentraut, Kirk Wolak, Vik Fearing, John Naylor, Dean Rasheed
Discussion: https://postgr.es/m/Y6IyTQQ/TsD5wnsH%40vm3.eradman.com
2023-08-23 07:49:03 -07:00
Nathan Bossart
f4b54e1ed9 Introduce macros for protocol characters.
This commit introduces descriptively-named macros for the
identifiers used in wire protocol messages.  These new macros are
placed in a new header file so that they can be easily used by
third-party code.
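
A few representative entries, roughly as the new header might define
them; the exact macro names are an assumption here (see
src/include/libpq/protocol.h):

#define PqMsg_Bind              'B'
#define PqMsg_Execute           'E'
#define PqMsg_Parse             'P'
#define PqMsg_Query             'Q'
#define PqMsg_Terminate         'X'
#define PqMsg_ReadyForQuery     'Z'

/* Third-party code can then test against PqMsg_Query rather than a bare 'Q'. */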

Author: Dave Cramer
Reviewed-by: Alvaro Herrera, Tatsuo Ishii, Peter Smith, Robert Haas, Tom Lane, Peter Eisentraut, Michael Paquier
Discussion: https://postgr.es/m/CADK3HHKbBmK-PKf1bPNFoMC%2BoBt%2BpD9PH8h5nvmBQskEHm-Ehw%40mail.gmail.com
2023-08-22 19:16:12 -07:00
Michael Paquier
6fde2d9a00 Fix pg_stat_reset_single_table_counters() for shared relations
This commit fixes pg_stat_reset_single_table_counters() for shared
relations.  This
feature has been added by e042678.  Unfortunately, this new behavior got
removed by 5891c7a when moving statistics to shared memory.

Reported-by: Mitsuru Hinata
Author: Masahiro Ikeda
Reviewed-by: Kyotaro Horiguchi, Masahiko Sawada
Discussion: https://postgr.es/m/7cc69f863d9b1bc677544e3accd0e4b4@oss.nttdata.com
Backpatch-through: 15
2023-08-21 13:32:14 +09:00
Michael Paquier
1e68e43d3f Add system view pg_wait_events
This new view, wrapped around a SRF, shows some information known about
wait events, namely:
- Name.
- Type (Activity, I/O, Extension, etc.).
- Description.

All the information retrieved comes from wait_event_names.txt, and the
description is the same as in the documentation, with filters applied to
remove any XML markup.  This view is useful when joined with
pg_stat_activity to get the description of a reported wait event.

Custom wait events for extensions are included in the view.

Original idea by Yves Colin.

Author: Bertrand Drouvot
Reviewed-by: Kyotaro Horiguchi, Masahiro Ikeda, Tom Lane, Michael
Paquier
Discussion: https://postgr.es/m/0e2ae164-dc89-03c3-cf7f-de86378053ac@gmail.com
2023-08-20 15:35:02 +09:00
Michael Paquier
00e49233a9 Fix format of entry in wait_event_names.txt
The entry LockManager had two consecutive spaces between two words.
This is not an actual bug, but let's be clean.  Thinko in fa88928.

Reported-by: Masahiro Ikeda
Author: Bertrand Drouvot
Discussion: https://postgr.es/m/dd836027-2e9e-4df9-9fd9-7527cd1757e1@gmail.com
2023-08-18 08:11:10 +09:00
Peter Eisentraut
dca20013eb Unify some error messages
We had essentially the same error in several different wordings.
Unify that.
2023-08-16 16:17:00 +02:00
John Naylor
c9bfa40914 Split out tiebreaker comparisons from comparetup_* functions
Previously, if a specialized comparator found equal datum1 keys,
the "comparetup" function would repeat the comparison on the
datum before proceeding with the unabbreviated first key
and/or additional sort keys.

Move comparing additional sort keys into "tiebreak" functions so
that specialized comparators can call these directly if needed,
avoiding duplicate work.
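
A self-contained toy version of the split (types and names here are made
up; the real code lives in tuplesort.c):

typedef struct SortItem
{
    int     key1;       /* leading key, compared first */
    int     key2;       /* additional key, used only as a tiebreak */
} SortItem;

/* Tiebreak: consulted only when the leading keys compare equal. */
static int
tiebreak_compare(const SortItem *a, const SortItem *b)
{
    return (a->key2 > b->key2) - (a->key2 < b->key2);
}

/* Specialized comparator: does the cheap leading-key comparison itself
 * and falls through to the tiebreak without repeating the key1 test. */
static int
specialized_compare(const void *pa, const void *pb)
{
    const SortItem *a = pa;
    const SortItem *b = pb;
    int     cmp = (a->key1 > b->key1) - (a->key1 < b->key1);

    return cmp != 0 ? cmp : tiebreak_compare(a, b);
}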

Reviewed by David Rowley

Discussion: https://postgr.es/m/CAFBsxsGaVfUrjTghpf%3DkDBYY%3DjWx1PN-fuusVe7Vw5s0XqGdGw%40mail.gmail.com
2023-08-16 17:15:07 +07:00
Michael Paquier
af720b4c50 Change custom wait events to use dynamic shared hash tables
Currently, the names of custom wait events must be registered in each
backend, requiring all of them to link to the shared memory area of an
extension, even if it is not loaded with shared_preload_libraries.

This patch relaxes the constraints related to this infrastructure by
storing the wait events and their names in two dynamic hash tables in
shared memory.  This has the advantage of simplifying the registration of
custom wait events down to a single routine call that returns an event ID
ready for consumption:
uint32 WaitEventExtensionNew(const char *wait_event_name);

The caller of this routine can then cache locally the ID returned, to be
used for pgstat_report_wait_start(), WaitLatch() or a similar routine.
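
A hedged sketch of the resulting usage pattern in an extension's main
loop; names other than WaitEventExtensionNew(), WaitLatch() and
ResetLatch() (such as my_wait_event and naptime_ms) are illustrative:

static uint32 my_wait_event = 0;

    /* Register once, lazily, and cache the ID in the local process. */
    if (my_wait_event == 0)
        my_wait_event = WaitEventExtensionNew("MyExtensionMainLoop");

    (void) WaitLatch(MyLatch,
                     WL_LATCH_SET | WL_TIMEOUT | WL_EXIT_ON_PM_DEATH,
                     naptime_ms,
                     my_wait_event);
    ResetLatch(MyLatch);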

The implementation uses two hash tables: one with a key based on the
event name to avoid duplicates and a second using the event ID as key
for event lookups, as in pg_stat_activity.  These tables can hold a
minimum of 16 entries and a maximum of 128 entries, which should be
plenty.

The code changes done in worker_spi show how things are simplified (most
of the code removed in this commit comes from there):
- worker_spi_init() is gone.
- No more shared memory hooks required (size requested and
initialization).
- The custom wait event ID is cached in the process that needs to set
it, with one single call to WaitEventExtensionNew() to retrieve it.

Per suggestion from Andres Freund.

Author: Masahiro Ikeda, with a few tweaks from me.
Discussion: https://postgr.es/m/20230801032349.aaiuvhtrcvvcwzcx@awork3.anarazel.de
2023-08-14 14:47:27 +09:00
Michael Paquier
5dc456b7dd Fix code indentation violations introduced by recent commits
The two culprit commits are 5765cfe and 5e0c761.

Per buildfarm member koel for the first commit, while I have noticed the
second one in passing.
2023-08-11 20:43:34 +09:00