mirror of https://github.com/postgres/postgres.git synced 2025-05-29 16:21:20 +03:00

59362 Commits

Author SHA1 Message Date
Michael Paquier
7db9bfc1f7 Add missing query ID reporting in extended query protocol
This commit adds query ID reports for two code paths when processing
extended query protocol messages:
- When receiving a bind message, setting it to the first Query retrieved
from the cached plan.
- When receiving an execute message, setting it to the first PlannedStmt
stored in a portal.

An advantage of this method is that it covers all the types of portals
handled in the extended query protocol, particularly these two cases
where the report done in ExecutorStart() is not enough (nor would an
addition in ExecutorRun() be, for the second point):
- Multiple execute messages, with multiple ExecutorRun().
- Portal with execute/fetch messages, like a query with a RETURNING
clause and a fetch size that stores the tuples in a first execute
message going through ExecutorStart() and ExecutorRun(), followed by one
or more execute messages doing only fetches from the tuplestore created
in the first message.  This corresponds to the case where
execute_is_fetch is set, for example.

Note that the query ID reporting done in ExecutorStart() is still
necessary, as an EXECUTE requires it.  Query ID reporting is optimistic
and more calls to pgstat_report_query_id() don't matter as the first
report takes priority except if the report is forced.  The comment in
ExecutorStart() is adjusted to reflect better the reality with the
extended query protocol.

The test added in pg_stat_statements is a courtesy of Robert Haas.  This
uses psql's \bind metacommand, hence this part is backpatched down to
v16.
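
A minimal psql sketch of the extended-protocol path this exercises
(assuming pg_stat_statements is installed; the query itself is
illustrative):

    -- run a parameterized query through bind/execute messages
    SELECT pg_stat_statements_reset();
    SELECT $1::int + $2::int \bind 1 2 \g
    -- the bind and execute messages now report a query ID
    SELECT query, calls FROM pg_stat_statements WHERE query LIKE '%$1%';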

Reported-by: Kaido Vaikla, Erik Wienhold
Author: Sami Imseih
Reviewed-by: Jian He, Andrei Lepikhov, Michael Paquier
Discussion: https://postgr.es/m/CA+427g8DiW3aZ6pOpVgkPbqK97ouBdf18VLiHFesea2jUk3XoQ@mail.gmail.com
Discussion: https://postgr.es/m/CA+TgmoZxtnf_jZ=VqBSyaU8hfUkkwoJCJ6ufy4LGpXaunKrjrg@mail.gmail.com
Discussion: https://postgr.es/m/1391613709.939460.1684777418070@office.mailbox.org
Backpatch-through: 14
2024-09-18 09:59:14 +09:00
Thomas Munro
ec1d545c8f Allow ReadStream to be consumed as raw block numbers.
Commits 041b9680 and 6377e12a changed the interface of
scan_analyze_next_block() to take a ReadStream instead of a BlockNumber
and a BufferAccessStrategy, and to return a value to indicate when the
stream has run out of blocks.

This caused integration problems for at least one known extension that
uses specially encoded BlockNumber values that map to different
underlying storage, because acquire_sample_rows() sets up the stream so
that read_stream_next_buffer() reads blocks from the main fork of the
relation's SMgrRelation.

Provide read_stream_next_block(), as a way for such an extension to
access the stream of raw BlockNumbers directly and forward them to its
own ReadBuffer() calls after decoding, as it could in earlier releases.
The new function returns the BlockNumber and BufferAccessStrategy that
were previously passed directly to scan_analyze_next_block().
Alternatively, an extension could wrap the stream of BlockNumbers in
another ReadStream with a callback that performs any decoding required
to arrive at real storage manager BlockNumber values, so that it could
benefit from the I/O combining and concurrency provided by
read_stream.c.

Another class of table access method that does nothing in
scan_analyze_next_block() because it is not block-oriented could use
this function to control the number of block sampling loops.  It could
match the previous behavior with "return read_stream_next_block(stream,
&bas) != InvalidBlockNumber".

Ongoing work is expected to provide better ANALYZE support for table
access methods that don't behave like heapam with respect to storage
blocks, but that will be for future releases.

Back-patch to 17.

Reported-by: Mats Kindahl <mats@timescale.com>
Reviewed-by: Mats Kindahl <mats@timescale.com>
Discussion: https://postgr.es/m/CA%2B14425%2BCcm07ocG97Fp%2BFrD9xUXqmBKFvecp0p%2BgV2YYR258Q%40mail.gmail.com
2024-09-18 11:29:58 +12:00
Tom Lane
f7567f9e53 Repair pg_upgrade for identity sequences with non-default persistence.
Since we introduced unlogged sequences in v15, identity sequences
have defaulted to having the same persistence as their owning table.
However, it is possible to change that with ALTER SEQUENCE, and
pg_dump tries to preserve the logged-ness of sequences when it doesn't
match (as indeed it wouldn't for an unlogged table from before v15).

The fly in the ointment is that ALTER SEQUENCE SET [UN]LOGGED fails
in binary-upgrade mode, because it needs to assign a new relfilenode
which we cannot permit in that mode.  Thus, trying to pg_upgrade a
database containing a mismatching identity sequence failed.
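
For instance, such a mismatch can be created like this (the sequence
name is assumed to be the default one):

    CREATE UNLOGGED TABLE t (id int GENERATED ALWAYS AS IDENTITY);
    ALTER SEQUENCE t_id_seq SET LOGGED;  -- persistence now differs from the table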

To fix, add syntax to ADD/ALTER COLUMN GENERATED AS IDENTITY to allow
the sequence's persistence to be set correctly at creation, and use
that instead of ALTER SEQUENCE SET [UN]LOGGED in pg_dump.  (I tried to
make SET [UN]LOGGED work without any pg_dump modifications, but that
seems too fragile to be a desirable answer.  This way should be
markedly faster anyhow.)

In passing, document the previously-undocumented SEQUENCE NAME option
that pg_dump also relies on for identity sequences; I see no value
in trying to pretend it doesn't exist.

Per bug #18618 from Anthony Hsu.
Back-patch to v15 where we invented this stuff.

Discussion: https://postgr.es/m/18618-d4eb26d669ed110a@postgresql.org
2024-09-17 15:53:36 -04:00
Peter Geoghegan
a24bffc021 Avoid parallel nbtree index scan hangs with SAOPs.
Commit 5bf748b8, which enhanced nbtree ScalarArrayOp execution, made
parallel index scans work with the new design for arrays via explicit
scheduling of primitive index scans.  A backend that successfully
scheduled the scan's next primitive index scan saved its backend local
array keys in shared memory.  Any backend could pick up the scheduled
primitive scan within _bt_first.  This scheme decouples scheduling a
primitive scan from starting the scan (by performing another descent of
the index via a _bt_search call from _bt_first) to make things robust.
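
The affected plan shape is, roughly, a parallel index scan driven by an
array key; a purely illustrative example (table and index are not from
the commit):

    CREATE INDEX ON orders (status);
    SET max_parallel_workers_per_gather = 2;
    EXPLAIN (COSTS OFF)
    SELECT count(*) FROM orders
    WHERE status = ANY (ARRAY['new', 'open', 'hold']);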

The scheme had a deadlock hazard, at least when the leader process
participated in the scan.  _bt_parallel_seize had a code path that made
backends that were not in an immediate position to start a scheduled
primitive index scan wait for some other backend to do so instead.
Under the right circumstances, the leader process could wait here
forever: the leader would wait for any other backend to start the
primitive scan, while every worker was busy waiting on the leader to
consume tuples from the scan's tuple queue.

To fix, don't wait for a scheduled primitive index scan to be started by
some other eligible backend from within _bt_parallel_seize (when the
calling backend isn't in a position to do so itself).  Return false
instead, while recording that the scan has a scheduled primitive index
scan in backend local state.  This leaves the backend in the same state
as the existing case where a backend schedules (or tries to schedule)
another primitive index scan from within _bt_advance_array_keys, before
calling _bt_parallel_seize.  _bt_parallel_seize already handles that
case by returning false without waiting, and without unsetting the
backend local state.  Leaving the backend in this state enables it to
start a previously scheduled primitive index scan once it gets back to
_bt_first.

Oversight in commit 5bf748b8, which enhanced nbtree ScalarArrayOp
execution.

Matthias van de Meent, with tweaks by me.

Author: Matthias van de Meent <boekewurm+postgres@gmail.com>
Reported-By: Tomas Vondra <tomas@vondra.me>
Reviewed-By: Peter Geoghegan <pg@bowt.ie>
Discussion: https://postgr.es/m/CAH2-WzmMGaPa32u9x_FvEbPTUkP5e95i=QxR8054nvCRydP-sw@mail.gmail.com
Backpatch: 17-, where nbtree SAOP execution was enhanced.
2024-09-17 11:10:33 -04:00
Bruce Momjian
054a23b674 scripts: add Perl script to add links to release notes
Reported-by: jian he

Discussion: https://postgr.es/m/ZuYsS5XdA7hVcV9l@momjian.us

Backpatch-through: 12
2024-09-16 13:26:37 -04:00
Tom Lane
b9645dca10 Replace usages of xmlXPathCompile() with xmlXPathCtxtCompile().
In existing releases of libxml2, xmlXPathCompile can be driven
to stack overflow because it fails to protect itself against
too-deeply-nested input.  While there is an upstream fix as of
yesterday, it will take years for that to propagate into all
shipping versions.  In the meantime, we can protect our own
usages basically for free by calling xmlXPathCtxtCompile instead.

(The actual bug is that libxml2 keeps its nesting counter in the
xmlXPathContext, and its parsing code was willing to just skip
counting nesting levels if it didn't have a context.  So if we supply
a context, all is well.  It seems odd actually that it works at all
to not supply a context, because this means that XPath parsing does
not have access to XML namespace info.  Apparently libxml2 never
checks namespaces until runtime?  Anyway, this seems like good
future-proofing even if its only immediate effect is to dodge a bug.)
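
At the SQL level, the XPath-evaluating functions are among the usages
being protected; purely as an illustration:

    SELECT xpath('/root/item/text()',
                 '<root><item>42</item></root>'::xml);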

Sadly, this hack only offers protection with libxml2 2.9.11 and newer.
Before that there are multiple similar problems, so if you are
processing untrusted XML it behooves you to get a newer version.
But we have some pretty old libxml2 in the buildfarm, so it seems
impractical to add a regression test to verify this fix.

Per bug #18617 from Jingzhou Fu.  Back-patch to all supported
versions.

Discussion: https://postgr.es/m/18617-1cee4d2ed1f4e7ae@postgresql.org
Discussion: https://gitlab.gnome.org/GNOME/libxml2/-/issues/799
2024-09-15 13:33:09 -04:00
Tom Lane
6283ff2018 Run regression tests with timezone America/Los_Angeles.
Historically we've used timezone "PST8PDT", but the recent release
2024b of tzdb changes the definition of that zone in a way that
breaks many test cases concerned with dates before 1970.  Although
we've not yet adopted 2024b into our own tree, this is already
problematic for people using --with-system-tzdata if their platform
has already adopted 2024b.  To work with both older and newer
versions of tzdb, switch to using "America/Los_Angeles", accepting
the ensuing changes in regression test results.
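
The difference only matters for pre-1970 timestamps, along these lines:

    SET timezone = 'America/Los_Angeles';
    SELECT '1969-07-20 20:17:40 UTC'::timestamptz;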

Back-patch to all supported branches.

Per report and patch from Wolfgang Walther.

Discussion: https://postgr.es/m/0a997455-5aba-4cf2-a354-d26d8bcbfae6@technowledgy.de
2024-09-14 17:55:02 -04:00
Alvaro Herrera
fca91b5c90 Add commit 7229ebe011df to .git-blame-ignore-revs.
2024-09-14 20:17:30 +02:00
Bruce Momjian
77120dd081 doc PG 17 relnotes: adjust SGML format for commit add script
Backpatch-through: 17 only
2024-09-14 13:20:02 -04:00
Bruce Momjian
4a89807569 doc PG 17 relnotes: remove commit link whose comment was removed
Backpatch-through: 17 only
2024-09-14 12:49:47 -04:00
Tom Lane
e3b2c651bc Remove obsolete comment in pg_stat_statements.
Commit 76db9cb63 removed the use of multiple nesting counters,
but missed one comment describing that arrangement.

Back-patch to v17 where 76db9cb63 came in, just to avoid confusion.

Julien Rouhaud

Discussion: https://postgr.es/m/gfcwh3zjxc2vygltapgo7g6yacdor5s4ynr234b6v2ohhuvt7m@gr42joxalenw
2024-09-14 11:42:40 -04:00
Bruce Momjian
54497e0ed8 doc PG 17 relnotes: update to current
Backpatch-through: 17 only
2024-09-14 10:38:34 -04:00
Andrew Dunstan
dc2a660bd9 Improve meson's detection of perl build flags
The current method of detecting perl build flags breaks if the path to
perl contains a space. This change makes two improvements. First,
instead of getting a list of ldflags and ccdlflags and then trying to
filter those out of the reported ldopts, we tell perl to suppress
reporting those in the first instance. Second, it tells perl to parse
those and output them, one per line. Thus any space within an option,
such as a space in a file name, is preserved.

Issue reported off-list by Muralikrishna Bandaru

Discussion: https://postgr.es/01117f88-f465-bf6c-9362-083bd72ca305@dunslane.net

Backpatch to release 16.
2024-09-14 10:36:57 -04:00
Bruce Momjian
984702d0ee doc PG 17 relnotes: move EXPLAIN items to their own section
Reported-by: Álvaro Herrera

Discussion: https://postgr.es/m/202409111750.xtcbl7ppdoyn@alvherre.pgsql

Backpatch-through: 17 only
2024-09-14 09:27:21 -04:00
Andrew Dunstan
648397b1d4 Only define NO_THREAD_SAFE_LOCALE for MSVC plperl when required
Latest versions of Strawberry Perl define USE_THREAD_SAFE_LOCALE, and we
therefore get a handshake error when building against such instances.
The solution is to perform a test to see if USE_THREAD_SAFE_LOCALE is
defined and only define NO_THREAD_SAFE_LOCALE if it isn't.

Backpatch the meson.build fix back to release 16 and apply the same
logic to Mkvcbuild.pm in releases 12 through 16.

Original report of the issue from Muralikrishna Bandaru.
2024-09-14 08:48:04 -04:00
Bruce Momjian
4e7b9a1adc doc PG 17 relnotes: move partition access method item
Also clarify wording.

Reported-by: Álvaro Herrera

Discussion: https://postgr.es/m/202409111750.xtcbl7ppdoyn@alvherre.pgsql

Backpatch-through: 17 only
2024-09-13 18:21:56 -04:00
Bruce Momjian
26cea9fb82 doc PG 17 relnotes: add dynamic shared memory registry item
Reported-by: Nathan Bossart

Discussion: https://postgr.es/m/Ztcuwbs0FGCPOEu9@nathan

Backpatch-through: 17 only
2024-09-13 16:17:42 -04:00
Tom Lane
418c6a2c72 Allow _h_indexbuild() to be interrupted.
When we are building a hash index that is large enough to need
pre-sorting (larger than either maintenance_work_mem or NBuffers),
the initial sorting phase is interruptible, but the insertion
phase wasn't.  Add the missing CHECK_FOR_INTERRUPTS().
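
A sketch of a build large enough to reach the pre-sorting path (sizes
are illustrative):

    CREATE TABLE big (x int);
    INSERT INTO big SELECT g FROM generate_series(1, 10000000) g;
    -- cancelling the statement now also interrupts the insertion phase
    CREATE INDEX big_hash ON big USING hash (x);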

Per bug #18616 from Alexander Lakhin.  Back-patch to all
supported branches.

Pavel Borisov

Discussion: https://postgr.es/m/18616-acbb9e5caf41e964@postgresql.org
2024-09-13 16:17:04 -04:00
Bruce Momjian
b9a65a678b doc PG 17 relnotes: replace commit hashes with section marks
Backpatch-through: 17 only
2024-09-13 12:29:51 -04:00
Nathan Bossart
9b3c3c0fc2 Fix contrib/pageinspect's test for sequences.
I managed to break this test in two different ways in commit
05036a3155.

First, the output of the new call to tuple_data_split() on the test
sequence is dependent on endianness.  This is fixed by setting a
special start value for the test sequence that produces the same
output regardless of the endianness of the machine.

Second, on versions older than v15, the new test case fails under
"force_parallel_mode = regress" with the following error:

	ERROR:  cannot access temporary tables during a parallel operation

This is because pageinspect's disk-accessing functions are
incorrectly marked PARALLEL SAFE on versions older than v15 (see
commit aeaaf520f4 for details).  This one is fixed by changing the
test sequence to be permanent.  The only reason it was previously
marked temporary was to avoid needing a DROP SEQUENCE command at
the end of the test.  Unlike some other tests in this file, the use
of a permanent sequence here shouldn't result in any test
instability like what was fixed by commit e2933a6e11.

Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/ZuOKOut5hhDlf_bP%40nathan
Backpatch-through: 12
2024-09-13 10:16:40 -05:00
Bruce Momjian
9f949c9cb2 doc PG 17 relnotes: add links to commits
Copied from SGML comments.

Discussion: https://postgr.es/m/CACJufxF+9YCDce5vzmZY6ZCEeTJsYFG-kPohT9UjKikJX30mGw@mail.gmail.com

Author: jian he

Backpatch-through: 17 only
2024-09-13 09:42:39 -04:00
Amit Langote
32ccfa0476 SQL/JSON: Update example in JSON_QUERY() documentation
Commit 2c27346ed684 fixed the behavior of JSON_QUERY() when WITH
CONDITIONAL WRAPPER is used, but the documentation example wasn't
updated to reflect this change. This commit updates the example to
show the correct result.

Per off-list report from Andreas Ulbrich.

Backpatch-through: 17
2024-09-13 16:10:01 +09:00
Nathan Bossart
6ea7f04b73 Reintroduce support for sequences in pgstattuple and pageinspect.
Commit 4b82664156 restricted a number of functions provided by
contrib modules to only relations that use the "heap" table access
method.  Sequences always use this table access method, but they do
not advertise as such in the pg_class system catalog, so the
aforementioned commit also (presumably unintentionally) removed
support for sequences from some of these functions.  This commit
reintroduces said support for sequences to these functions and adds
a couple of relevant tests.
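
A sketch of the reintroduced behavior, assuming the pageinspect
extension is installed:

    CREATE EXTENSION IF NOT EXISTS pageinspect;
    CREATE SEQUENCE s;
    -- accepted again, since sequences use the heap table access method
    SELECT * FROM page_header(get_raw_page('s', 0));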

Co-authored-by: Ayush Vatsa
Reviewed-by: Robert Haas, Michael Paquier, Matthias van de Meent
Discussion: https://postgr.es/m/CACX%2BKaP3i%2Bi9tdPLjF5JCHVv93xobEdcd_eB%2B638VDvZ3i%3DcQA%40mail.gmail.com
Backpatch-through: 12
2024-09-12 16:31:29 -05:00
Tom Lane
cc4fdfa411 Make jsonpath .string() be immutable for datetimes.
Discussion of commit ed055d249 revealed that we don't actually
want jsonpath's .string() method to depend on DateStyle, nor
TimeZone either, because the non-"_tz" jsonpath functions are
supposed to be immutable.  Potentially we could allow a TimeZone
dependency in the "_tz" variants, but it seems better to just
uniformly define this method as returning the same string that
jsonb text output would do.  That's easier to implement too,
saving a couple dozen lines.
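
An illustrative call; .string() now renders the datetime the same way
jsonb text output would, regardless of DateStyle or TimeZone:

    SELECT jsonb_path_query('"2024-09-12 14:30:00+00"',
                            '$.datetime().string()');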

Patch by me, per complaint from Peter Eisentraut.  Back-patch
to v17 where this feature came in (in 66ea94e8e).  Also
back-patch ed055d249 to provide test cases.

Discussion: https://postgr.es/m/5e8879d0-a3c8-4be2-950f-d83aa2af953a@eisentraut.org
2024-09-12 14:30:29 -04:00
David Rowley
2645f6d643 Doc: alphabetize aggregate function table
A few recent JSON aggregates have been added without much consideration
to the existing order.  Put these back in alphabetical order (with the
exception of the JSONB variant of each JSON aggregate).

Author: Wolfgang Walther <walther@technowledgy.de>
Reviewed-by: Marlene Reiterer <marlene.reiterer.03@gmail.com>
Discussion: https://postgr.es/m/6a7b910c-3feb-4006-b817-9b4759cb6bb6%40technowledgy.de
Backpatch-through: 16, where these aggregates were added
2024-09-12 22:37:27 +12:00
Amit Langote
2c27346ed6 SQL/JSON: Fix JSON_QUERY(... WITH CONDITIONAL WRAPPER)
Currently, when WITH CONDITIONAL WRAPPER is specified, array wrappers
are applied even to a single SQL/JSON item if it is a scalar JSON
value, but this behavior does not comply with the standard.

To fix, apply wrappers only when there are multiple SQL/JSON items
in the result.
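
A sketch of the corrected behavior, per the description above:

    -- multiple items: wrapped
    SELECT JSON_QUERY(jsonb '[1, 2, 3]', '$[*]' WITH CONDITIONAL WRAPPER);
    -- a single scalar item: no longer wrapped
    SELECT JSON_QUERY(jsonb '[1]', '$[*]' WITH CONDITIONAL WRAPPER);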

Reported-by: Peter Eisentraut <peter@eisentraut.org>
Author: Peter Eisentraut <peter@eisentraut.org>
Author: Amit Langote <amitlangote09@gmail.com>
Reviewed-by: Andrew Dunstan <andrew@dunslane.net>
Discussion: https://postgr.es/m/8022e067-818b-45d3-8fab-6e0d94d03626%40eisentraut.org
Backpatch-through: 17
2024-09-12 09:39:42 +09:00
Tom Lane
7f88e50b45 Remove incorrect Assert.
check_agglevels_and_constraints() asserted that if we find an
aggregate function in an EXPR_KIND_FROM_SUBSELECT expression, the
expression must be in a LATERAL subquery.  Alexander Lakhin found a
case where that's not so: because of the odd scoping rules for NEW/OLD
within a rule, a reference to NEW/OLD could cause an aggregate to be
considered top-level even though it's in an unmarked sub-select.
The error message that would be thrown seems sufficiently on-point,
so just remove the Assert.  (Hence, this is not a bug for production
builds.)

This Assert was added by me in commit eaccfded9 (9.3 era).  It looks
like I put it in to cross-check that the new logic for detecting
misplaced aggregates (using agglevelsup) caught the same cases that a
previous check on p_lateral_active did.  So there might have been some
related misbehavior before eaccfded9 ... but that's very ancient
history by now, so I didn't dig any deeper.

Per bug #18608 from Alexander Lakhin.  Back-patch to all supported
branches.

Discussion: https://postgr.es/m/18608-48de0717508ee429@postgresql.org
2024-09-11 11:41:59 -04:00
Magnus Hagander
7748c847c3 pg_createsubscriber: minor documentation fixes
2024-09-11 16:31:30 +02:00
Tomas Vondra
78bc5f7118 Fix unique key checks in JSON object constructors
When building a JSON object, the code builds a hash table of keys, to
allow checking if the keys are unique. The uniqueness check and adding
the new key happens in json_unique_check_key(), but this assumes the
pointer to the key remains valid.

Unfortunately, two places passed pointers to keys in a buffer, while
also appending more data (additional key/value pairs) to the buffer.
With enough data the buffer is resized by enlargeStringInfo(), which
calls repalloc(), invalidating the earlier key pointers.

Due to this the uniqueness check may fail with both false negatives and
false positives, producing JSON objects with duplicate keys or failing
to produce a perfectly valid JSON object.

This affects multiple functions that enforce uniqueness of keys, all
introduced in PG16 with the new SQL/JSON:

- json_object_agg_unique / jsonb_object_agg_unique
- json_object / jsonb_objectagg

Existing regression tests did not detect the issue, simply because the
initial buffer size is 1024 and the objects were small enough not to
require the repalloc.
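
A sketch of an aggregate call large enough to push the buffer past the
initial 1024 bytes and trigger the repalloc:

    SELECT json_object_agg_unique(g::text, repeat('x', 100))
    FROM generate_series(1, 1000) g;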

With a sufficiently large object, AddressSanitizer reported the access
to invalid memory immediately. So would valgrind, of course.

Fixed by copying the key into the hash table memory context, and adding
regression tests with enough data to repalloc the buffer. Backpatch to
16, where the functions were introduced.

Reported by Alexander Lakhin. Investigation and initial fix by Junwang
Zhao, with various improvements and tests by me.

Reported-by: Alexander Lakhin
Author: Junwang Zhao, Tomas Vondra
Backpatch-through: 16
Discussion: https://postgr.es/m/18598-3279ed972a2347c7@postgresql.org
Discussion: https://postgr.es/m/CAEG8a3JjH0ReJF2_O7-8LuEbO69BxPhYeXs95_x7+H9AMWF1gw@mail.gmail.com
2024-09-11 13:21:30 +02:00
Tom Lane
946f150aa1 Fix some whitespace issues in XMLSERIALIZE(... INDENT).
We must drop whitespace while parsing the input, else libxml2
will include "blank" nodes that interfere with the desired
indentation behavior.  The end result is that we didn't indent
nodes separated by whitespace.

Also, it seems that libxml2 may add a trailing newline when working
in DOCUMENT mode.  This is semantically insignificant, so strip it.
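
A sketch of input where the whitespace-separated siblings are now
indented:

    SELECT xmlserialize(DOCUMENT '<root> <a>1</a> <b>2</b> </root>'::xml
                        AS text INDENT);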

This is in the gray area between being a bug fix and a definition
change.  However, the INDENT option is still pretty new (since v16),
so I think we can get away with changing this in stable branches.
Hence, back-patch to v16.

Jim Jones

Discussion: https://postgr.es/m/872865a8-548b-48e1-bfcd-4e38e672c1e4@uni-muenster.de
2024-09-10 16:20:31 -04:00
Amit Langote
77aebe9a8d SQL/JSON: Avoid initializing unnecessary ON ERROR / ON EMPTY steps
When the ON ERROR / ON EMPTY behavior is to return NULL, returning
NULL directly from ExecEvalJsonExprPath() suffices. Therefore, there's
no need to create separate steps to check the error/empty flag or
those to evaluate the constant NULL expression.  This speeds up
common cases because the default ON ERROR / ON EMPTY behavior for
JSON_QUERY() and JSON_VALUE() is to return NULL.  However, these steps
are necessary if the RETURNING type is a domain, as constraints on the
domain may need to be checked.
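
For example, this common form takes the fast path, since the implicit
NULL ON EMPTY / NULL ON ERROR defaults apply:

    SELECT JSON_QUERY(jsonb '{"a": 1}', '$.b');  -- returns NULL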

Reported-by: Jian He <jian.universality@gmail.com>
Author: Jian He <jian.universality@gmail.com>
Author: Amit Langote <amitlangote09@gmail.com>
Discussion: https://postgr.es/m/CACJufxEo4sUjKCYtda0_qt9tazqqKPmF1cqhW9KBOUeJFqQd2g@mail.gmail.com
Backpatch-through: 17
2024-09-09 15:59:06 +09:00
Michael Paquier
cd6b2ae3e7 Fix waits of REINDEX CONCURRENTLY for indexes with predicates or expressions
As introduced by f9900df5f94, a REINDEX CONCURRENTLY job done for an
index with predicates or expressions would set PROC_IN_SAFE_IC in its
MyProc->statusFlags, causing it to be ignored by other concurrent
operations.

Such concurrent index rebuilds should never be ignored, as a predicate
or an expression could call a user-defined function that accesses a
different table than the table where the index is rebuilt.
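
A sketch of the kind of rebuild affected (f() stands in for a
user-defined function that might read another table):

    CREATE FUNCTION f(x int) RETURNS int LANGUAGE sql IMMUTABLE RETURN x + 1;
    CREATE TABLE t (x int);
    CREATE INDEX t_f_idx ON t (f(x));
    REINDEX INDEX CONCURRENTLY t_f_idx;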

A test that uses injection points is added, backpatched down to 17.
Michail has proposed a different test, but I have added something
simpler with more coverage.

Oversight in f9900df5f949.

Author: Michail Nikolaev
Discussion: https://postgr.es/m/CANtu0oj9A3kZVduFTG0vrmGnKB+DCHgEpzOp0qAyOgmks84j0w@mail.gmail.com
Backpatch-through: 14
2024-09-09 13:49:59 +09:00
Tom Lane
e69030cb51 Fix incorrect pg_stat_io output on 32-bit machines.
pg_stat_get_io() applied TimestampTzGetDatum twice to the
stat_reset_timestamp value.  On 64-bit builds that's harmless because
TimestampTzGetDatum is a no-op, but on 32-bit builds it results in
displaying garbage in the stats_reset column of the pg_stat_io view.

Bug dates to commit a9c70b46d which introduced pg_stat_io, so
back-patch to v16 where that came in.
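
The affected column is the view's stats_reset timestamp, e.g.:

    SELECT backend_type, object, context, stats_reset
    FROM pg_stat_io
    LIMIT 1;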

Bertrand Drouvot

Discussion: https://postgr.es/m/Ztrd+XcPTz1zorkg@ip-10-97-1-34.eu-west-3.compute.internal
2024-09-06 11:58:10 -04:00
Amit Langote
446d5ad7ae SQL/JSON: Fix default ON ERROR behavior for JSON_TABLE
Use EMPTY ARRAY instead of EMPTY.

This change does not affect the runtime behavior of JSON_TABLE(),
which continues to return an empty relation ON ERROR. It only alters
whether the default ON ERROR behavior is shown in the deparsed output.
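
The change is visible in deparsed output such as a view definition
(names are illustrative):

    CREATE VIEW jt_view AS
      SELECT * FROM JSON_TABLE(jsonb '[]', '$[*]' COLUMNS (a int));
    -- any deparsed ON ERROR default now reads EMPTY ARRAY rather than EMPTY
    SELECT pg_get_viewdef('jt_view'::regclass);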

Reported-by: Jian He <jian.universality@gmail.com>
Discussion: https://postgr.es/m/CACJufxEo4sUjKCYtda0_qt9tazqqKPmF1cqhW9KBOUeJFqQd2g@mail.gmail.com
Backpatch-through: 17
2024-09-06 13:30:42 +09:00
Amit Langote
cd680b3921 SQL/JSON: Fix JSON_TABLE() column deparsing
The deparsing code in get_json_expr_options() unnecessarily emitted
the default column-specific ON ERROR / EMPTY behavior when the
top-level ON ERROR behavior in JSON_TABLE was set to ERROR. Fix that
by not overriding the column-specific default, determined based on
the column's JsonExprOp in get_json_table_columns(), with
JSON_BEHAVIOR_ERROR when that is the top-level ON ERROR behavior.

Note that this only removes redundancy; the current deparsing output
is not incorrect, just redundant.

Reviewed-by: Jian He <jian.universality@gmail.com>
Discussion: https://postgr.es/m/CACJufxEo4sUjKCYtda0_qt9tazqqKPmF1cqhW9KBOUeJFqQd2g@mail.gmail.com
Backpatch-through: 17
2024-09-06 13:30:40 +09:00
Amit Langote
eef5195f30 Revert recent SQL/JSON related commits
Reverts c88ce386c4d, 5067c230b8e, and e4e27976a68, because a few
buildfarm animals didn't like one or all of them.
2024-09-06 12:52:39 +09:00
Amit Langote
e4e27976a6 SQL/JSON: Avoid initializing unnecessary ON ERROR / ON EMPTY steps
When the ON ERROR / ON EMPTY behavior is to return NULL, returning
NULL directly from ExecEvalJsonExprPath() suffices. Therefore, there's
no need to create separate steps to check the error/empty flag or
those to evaluate the constant NULL expression.  This speeds up
common cases because the default ON ERROR / ON EMPTY behavior for
JSON_QUERY() and JSON_VALUE() is to return NULL.  However, these steps
are necessary if the RETURNING type is a domain, as constraints on the
domain may need to be checked.

Reported-by: Jian He <jian.universality@gmail.com>
Author: Jian He <jian.universality@gmail.com>
Author: Amit Langote <amitlangote09@gmail.com>
Discussion: https://postgr.es/m/CACJufxEo4sUjKCYtda0_qt9tazqqKPmF1cqhW9KBOUeJFqQd2g@mail.gmail.com
Backpatch-through: 17
2024-09-06 12:04:29 +09:00
Amit Langote
5067c230b8 SQL/JSON: Fix default ON ERROR behavior for JSON_TABLE
Use EMPTY ARRAY instead of EMPTY.

This change does not affect the runtime behavior of JSON_TABLE(),
which continues to return an empty relation ON ERROR. It only alters
whether the default ON ERROR behavior is shown in the deparsed output.

Reported-by: Jian He <jian.universality@gmail.com>
Discussion: https://postgr.es/m/CACJufxEo4sUjKCYtda0_qt9tazqqKPmF1cqhW9KBOUeJFqQd2g@mail.gmail.com
Backpatch-through: 17
2024-09-06 12:01:49 +09:00
Amit Langote
c88ce386c4 SQL/JSON: Fix JSON_TABLE() column deparsing
The deparsing code in get_json_expr_options() unnecessarily emitted
the default column-specific ON ERROR / EMPTY behavior when the
top-level ON ERROR behavior in JSON_TABLE was set to ERROR. Fix that
by not overriding the column-specific default, determined based on
the column's JsonExprOp in get_json_table_columns(), with
JSON_BEHAVIOR_ERROR when that is the top-level ON ERROR behavior.

Note that this only removes redundancy; the current deparsing output
is not incorrect, just redundant.

Reviewed-by: Jian He <jian.universality@gmail.com>
Discussion: https://postgr.es/m/CACJufxEo4sUjKCYtda0_qt9tazqqKPmF1cqhW9KBOUeJFqQd2g@mail.gmail.com
Backpatch-through: 17
2024-09-06 12:01:49 +09:00
Amit Langote
fe32343814 Update comment about ExprState.escontext
The updated comment provides more helpful guidance by mentioning that
escontext should be set when soft error handling is needed.

Reported-by: Jian He <jian.universality@gmail.com>
Discussion: https://postgr.es/m/CACJufxEo4sUjKCYtda0_qt9tazqqKPmF1cqhW9KBOUeJFqQd2g@mail.gmail.com
Backpatch-through: 17
2024-09-06 12:01:49 +09:00
Bruce Momjian
35afec71ae doc PG 17 relnotes: remove tab complete for MERGE/SPLIT partit.
commit 60ae37a8b

Backpatch-through: 17 only
2024-09-05 21:49:02 -04:00
Tom Lane
7dcbf0afa2 Prevent mis-encoding of "trailing junk after numeric literal" errors.
Since commit 2549f0661, we reject an identifier immediately following
a numeric literal (without separating whitespace), because that risks
ambiguity with hex/octal/binary integers.  However, that patch used
token patterns like "{integer}{ident_start}", which is problematic
because {ident_start} matches only a single byte.  If the first
character after the integer is a multibyte character, this ends up
with flex reporting an error message that includes a partial multibyte
character.  That can cause assorted bad-encoding problems downstream,
both in the report to the client and in the postmaster log file.

To fix, use {identifier} not {ident_start} in the "junk" token
patterns, so that they will match complete multibyte characters.
This also seems like a better user experience quite apart from the
encoding problem: for "123abc" the error message will now say that
the error appeared at or near "123abc" instead of "123a".
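
For instance (error text abbreviated):

    SELECT 123abc;
    -- ERROR:  trailing junk after numeric literal at or near "123abc"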

While at it, add some commentary about why these patterns exist
and how they work.

Report and patch by Karina Litskevich; review by Pavel Borisov.
Back-patch to v15 where the problem came in.

Discussion: https://postgr.es/m/CACiT8iZ_diop=0zJ7zuY3BXegJpkKK1Av-PU7xh0EDYHsa5+=g@mail.gmail.com
2024-09-05 12:42:33 -04:00
Michael Paquier
07b828e9d4 Fix inconsistent LWLock tranche name "CommitTsSLRU"
This term was using an inconsistent casing between the code and the
documentation, using "CommitTsSLRU" in wait_event_names.txt and
"CommitTSSLRU" in the code.

Let's update the term in the code to match the documentation, "CommitTs"
being the more commonly used form, so that pg_stat_activity shows the
same term as the documentation.
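
After the change, a monitoring query sees the documented spelling:

    SELECT pid, wait_event_type, wait_event
    FROM pg_stat_activity
    WHERE wait_event = 'CommitTsSLRU';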

Oversight in 53c2a97a9266.

Author: Alexander Lakhin
Discussion: https://postgr.es/m/f7e514cf-2446-21f1-a5d2-8c089a6e2168@gmail.com
Backpatch-through: 17
2024-09-04 10:22:19 +09:00
Michael Paquier
bab1fd9277 Avoid installcheck failure in TAP tests using injection_points
These tests depend on the test module injection_points to be installed,
but it may not be available as the contents of src/test/modules/ are not
installed by default.

This commit adds a workaround based on a scan of pg_available_extensions
to check if the extension is available, skipping the test if it is not.
This allows installcheck to work transparently.
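
The availability check boils down to a query along these lines (a
sketch, not the test's exact code):

    SELECT count(*) > 0 AS have_injection_points
    FROM pg_available_extensions
    WHERE name = 'injection_points';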

There are more tests impacted by this problem on HEAD, but for now this
addresses only the tests that exist on HEAD and v17 as the release is
close by.

Reported-by: Maxim Orlov
Discussion: https://postgr.es/m/CACG=ezZkoT-pFz6a9XnyToiuR-Wg8fGELqHLoyBodr+2h-77qA@mail.gmail.com
Backpatch-through: 17
2024-09-04 08:56:28 +09:00
Michael Paquier
ff43b5e70d Simplify makefiles exporting twice enable_injection_points
This is confusing, as it exports the same variable twice.  Oversight in
6782709df81f that has spread in more places afterwards.

Reported-by: Alvaro Herrera, Tom Lane
Discussion: https://postgr.es/m/202408201630.mn6vbohjh7hh@alvherre.pgsql
Backpatch-through: 17
2024-09-04 08:05:56 +09:00
Tom Lane
94f1474e60 Stamp 17rc1. REL_17_RC1
2024-09-02 16:11:07 -04:00
Peter Eisentraut
c96176c48d Fix warnings from msgfmt
/usr/bin/msgfmt: po/fr.po: warning: PO file header fuzzy
                           warning: older versions of msgfmt will give an error on this

Apparently, not all versions of msgfmt produce this.  Quick fix for
now, more to be researched later.
2024-09-02 18:03:47 +02:00
Peter Eisentraut
e6ec1d6aab Fix rarely-run test for message wording change
fixup for 2e6a8047f0

Reported-by: Nazir Bilal Yavuz <byavuz81@gmail.com>
2024-09-02 17:47:18 +02:00
Peter Eisentraut
986a3ac497 Translation updates
Source-Git-URL: https://git.postgresql.org/git/pgtranslation/messages.git
Source-Git-Hash: d0110df9f34c2d32cb2652d4477c3135dabe84f7
2024-09-02 12:02:42 +02:00
Thomas Munro
3ed3683618 Fix unfairness in all-cached parallel seq scan.
Commit b5a9b18c introduced block streaming infrastructure with a special
fast path for all-cached scans, and commit b7b0f3f2 connected the
infrastructure up to sequential scans.  One of the fast path
micro-optimizations had an unintended consequence: it interfered with
parallel sequential scan's block range allocator (from commit 56788d21),
which has its own ramp-up and ramp-down algorithm when handing out
groups of pages to workers.  A scan of an all-cached table could give
extra blocks to one worker, when others had finished.  In some plans
(probably already very bad plans, such as the one reported by
Alexander), the unfairness could be magnified.

An internal buffer of 16 block numbers is removed, keeping just a single
block buffer for technical reasons.

Back-patch to 17.

Reported-by: Alexander Lakhin <exclusion@gmail.com>
Discussion: https://postgr.es/m/63a63690-dd92-c809-0b47-af05459e95d1%40gmail.com
2024-08-31 17:29:30 +12:00