mirror of https://github.com/postgres/postgres.git synced 2025-12-22 17:42:17 +03:00
Commit Graph

5082 Commits

Author SHA1 Message Date
Dean Rasheed
9428c001f6 Speed up numeric division by always using the "fast" algorithm.
Formerly there were two internal functions in numeric.c to perform
numeric division, div_var() and div_var_fast(). div_var() performed
division exactly to a specified rscale using Knuth's long division
algorithm, while div_var_fast() used the algorithm from the "FM"
library, which approximates each quotient digit using floating-point
arithmetic, and computes a truncated quotient with DIV_GUARD_DIGITS
extra digits. div_var_fast() could be many times faster than
div_var(), but did not guarantee correct results in all cases, and was
therefore only suitable for use in transcendental functions, where
small errors are acceptable.

This commit merges div_var() and div_var_fast() together into a single
function with an extra "exact" boolean parameter, which can be set to
false if the caller is OK with an approximate result. The new function
uses the faster algorithm from the "FM" library, except that when
"exact" is true, it does not truncate the computation with
DIV_GUARD_DIGITS extra digits, but instead performs the full-precision
computation, subtracting off complete multiples of the divisor for
each quotient digit. However, it is able to retain most of the
performance benefits of div_var_fast(), by delaying the propagation of
carries, allowing the inner loop to be auto-vectorized.

Since this may still lead to an inaccurate result, when "exact" is
true, it then inspects the remainder and uses that to adjust the
quotient, if necessary, to make it correct. In practice, the quotient
rarely needs to be adjusted, and never by more than one in the final
digit, though it's difficult to prove that, so the code allows for
larger adjustments, just in case.

In addition, use base-NBASE^2 arithmetic and a 64-bit dividend array,
similar to mul_var(), so that the number of iterations of the outer
loop is roughly halved. Together with the faster algorithm, this makes
div_var() up to around 20 times as fast as the old Knuth algorithm
when "exact" is true, and up to 2 or 3 times as fast as the old
div_var_fast() function when "exact" is false.
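
As an illustrative sketch (not taken from the commit itself), the two code
paths are exercised at the SQL level roughly like this:

    -- exact division: the '/' operator and div() compute a full-precision
    -- quotient at the required rscale
    SELECT 12345678901234567890::numeric / 31::numeric;
    -- approximate division is used internally by transcendental functions
    -- such as ln() and exp(), where tiny low-order errors are acceptable
    SELECT ln(2::numeric);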

Dean Rasheed, reviewed by Joel Jacobson.

Discussion: https://postgr.es/m/CAEZATCVHR10BPDJSANh0u2+Sg6atO3mD0G+CjKDNRMD-C8hKzQ@mail.gmail.com
2024-10-04 09:49:24 +01:00
Fujii Masao
17cc5f666f Fix inconsistent reporting of checkpointer stats.
Previously, the pg_stat_checkpointer view and the checkpoint completion
log message could show different numbers for buffers written
during checkpoints. The view only counted shared buffers,
while the log message included both shared and SLRU buffers,
causing inconsistencies.

This commit resolves the issue by updating both the view and the log message
to separately report shared and SLRU buffers written during checkpoints.
A new slru_written column is added to the pg_stat_checkpointer view
to track SLRU buffers, while the existing buffers_written column now
tracks only shared buffers. This change helps users distinguish between
the two types of buffers in both the pg_stat_checkpointer view and the
checkpoint completion log message.
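
An illustrative query against the updated view (a sketch, not from the
commit):

    SELECT num_timed, num_requested, buffers_written, slru_written
    FROM pg_stat_checkpointer;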

Bump catalog version.

Author: Nitin Jadhav
Reviewed-by: Bharath Rupireddy, Michael Paquier, Kyotaro Horiguchi, Robert Haas
Reviewed-by: Andres Freund, vignesh C, Fujii Masao
Discussion: https://postgr.es/m/CAMm1aWb18EpT0whJrjG+-nyhNouXET6ZUw0pNYYAe+NezpvsAA@mail.gmail.com
2024-10-02 11:17:47 +09:00
Fujii Masao
559efce1d6 Add num_done counter to the pg_stat_checkpointer view.
Checkpoints can be skipped when the server is idle. The existing num_timed and
num_requested counters in pg_stat_checkpointer track both completed and
skipped checkpoints, but there was no way to count only the completed ones.

This commit introduces the num_done counter, which tracks only completed
checkpoints, making it easier to see how many were actually performed.
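
A rough sketch of how the new counter relates to the existing ones
(per the description, skipped checkpoints are approximately the difference):

    SELECT num_timed, num_requested, num_done,
           num_timed + num_requested - num_done AS skipped
    FROM pg_stat_checkpointer;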

Bump catalog version.

Author: Anton A. Melnikov
Reviewed-by: Fujii Masao
Discussion: https://postgr.es/m/9ea77f40-818d-4841-9dee-158ac8f6e690@oss.nttdata.com
2024-09-30 11:56:05 +09:00
Tom Lane
147bbc90f7 Modernize to_char's Roman-numeral code, fixing overflow problems.
int_to_roman() only accepts plain "int" input, which is fine since
we're going to produce '###############' for any value above 3999
anyway.  However, the numeric and int8 variants of to_char() would
throw an error if the given input exceeded the integer range, while
the float-input variants invoked undefined-per-C-standard behavior.
Fix things so that you uniformly get '###############' for out-of-range
input.
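
Illustrative examples of the uniform behavior (a sketch; padding details
shown approximately):

    SELECT to_char(2024, 'RN');                   -- 'MMXXIV', right-aligned in a 15-character field
    SELECT to_char(100000000000::numeric, 'RN');  -- now '###############' instead of an error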

Also add test cases covering this code, plus the equally-untested
EEEE, V, and PL format codes.

Discussion: https://postgr.es/m/2956175.1725831136@sss.pgh.pa.us
2024-09-26 11:02:31 -04:00
Jeff Davis
ac30021356 Allow length=-1 for NUL-terminated input to pg_strncoll(), etc.
Like ICU, allow a length of -1 to be specified for NUL-terminated
arguments to pg_strncoll(), pg_strnxfrm(), and pg_strnxfrm_prefix().

Simplifies the code and comments.

Discussion: https://postgr.es/m/2d758e07dff26bcc7cbe2aec57431329bfe3679a.camel@j-davis.com
2024-09-24 15:15:18 -07:00
Jeff Davis
ceeaaed87a Tighten up make_libc_collator() and make_icu_collator().
Ensure that error paths within these functions do not leak a collator,
and return the result rather than using an out parameter. (Error paths
in the caller may still result in a leaked collator, which will be
addressed separately.)

In make_libc_collator(), if the first newlocale() succeeds and the
second one fails, close the first locale_t object.

The function make_icu_collator() doesn't have any external callers, so
change it to be static.

Discussion: https://postgr.es/m/54d20e812bd6c3e44c10eddcd757ec494ebf1803.camel@j-davis.com
2024-09-24 12:01:45 -07:00
Tom Lane
cd838e2008 Neaten up our choices of SQLSTATEs for XML-related errors.
When our XML-handling modules were first written, the SQL standard
lacked any error codes that were particularly intended for XML
error conditions.  Unsurprisingly, this led to some rather random
choices of errcodes in those modules.  Now the standard has a whole
SQLSTATE class, "Class 10 - XQuery Error", with a reasonably large
selection of relevant-looking errcodes.

In this patch I've chosen one fairly generic code defined by the
standard, 10608 = invalid_argument_for_xquery, and used it where
it seemed appropriate.  I've also made an effort to replace
ERRCODE_INTERNAL_ERROR everywhere it was not clearly reporting
a coding problem; in particular, many of the existing uses look
like they can fairly be reported as ERRCODE_OUT_OF_MEMORY.

It might be interesting to try to map libxml2's error codes into
the standard's new collection, but I've not undertaken that here.

Discussion: https://postgr.es/m/417250.1726341268@sss.pgh.pa.us
2024-09-24 12:59:56 -04:00
Michael Paquier
b14e9ce7d5 Extend PgStat_HashKey.objid from 4 to 8 bytes
This opens the possibility to define keys for more types of statistics
kinds in PgStat_HashKey, the first case being 8-byte query IDs for
statistics like pg_stat_statements.

This increases the size of PgStat_HashKey from 12 to 16 bytes, while
PgStatShared_HashEntry, the entry stored in the dshash for pgstats, keeps
the same size due to alignment.

xl_xact_stats_item, which tracks the stats items to drop in commit WAL
records, is increased from 12 to 16 bytes.  Note that individual chunks
in commit WAL records should be multiples of sizeof(int), hence 8-byte
object IDs are stored as two uint32, based on a suggestion from Heikki
Linnakangas.

While on it, the field of PgStat_HashKey is renamed from "objoid" to
"objid", as for some stats kinds this field does not refer to OIDs but
just IDs, like for replication slot stats.

This commit bumps the following format variables:
- PGSTAT_FILE_FORMAT_ID, as PgStat_HashKey is written to the stats file
for non-serialized stats kinds in the dshash table.
- XLOG_PAGE_MAGIC for the changes in xl_xact_stats_item.
- Catalog version, for the SQL function pg_stat_have_stats().

Reviewed-by: Bertrand Drouvot
Discussion: https://postgr.es/m/ZsvTS9EW79Up8I62@paquier.xyz
2024-09-18 12:44:15 +09:00
Peter Eisentraut
89f908a6d0 Add temporal FOREIGN KEY constraints
Add PERIOD clause to foreign key constraint definitions.  This is
supported for range and multirange types.  Temporal foreign keys check
for range containment instead of equality.

This feature matches the behavior of the SQL standard temporal foreign
keys, but it works on PostgreSQL's native ranges instead of SQL's
"periods", which don't exist in PostgreSQL (yet).

Reference actions ON {UPDATE,DELETE} {CASCADE,SET NULL,SET DEFAULT}
are not supported yet.
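
A minimal DDL sketch of the feature (table and column names invented; the
scalar part of the referenced temporal primary key may need btree_gist):

    CREATE EXTENSION IF NOT EXISTS btree_gist;

    CREATE TABLE employment (
        emp_id       int,
        valid_during daterange,
        PRIMARY KEY (emp_id, valid_during WITHOUT OVERLAPS)
    );

    CREATE TABLE assignment (
        emp_id       int,
        valid_during daterange,
        FOREIGN KEY (emp_id, PERIOD valid_during)
            REFERENCES employment (emp_id, PERIOD valid_during)
    );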

(previously committed as 34768ee361, reverted by 8aee330af55; this is
essentially unchanged from those)

Author: Paul A. Jungwirth <pj@illuminatedcomputing.com>
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Reviewed-by: jian he <jian.universality@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/CA+renyUApHgSZF9-nd-a0+OPGharLQLO=mDHcY4_qQ0+noCUVg@mail.gmail.com
2024-09-17 11:29:30 +02:00
Peter Eisentraut
fc0438b4e8 Add temporal PRIMARY KEY and UNIQUE constraints
Add WITHOUT OVERLAPS clause to PRIMARY KEY and UNIQUE constraints.
These are backed by GiST indexes instead of B-tree indexes, since they
are essentially exclusion constraints with = for the scalar parts of
the key and && for the temporal part.

(previously committed as 46a0cd4cef, reverted by 46a0cd4cefb; the new
part is this:)

Because 'empty' && 'empty' is false, the temporal PK/UQ constraint
allowed duplicates, which is confusing to users and breaks internal
expectations.  For instance, when GROUP BY checks functional
dependencies on the PK, it allows selecting other columns from the
table, but in the presence of duplicate keys you could get the value
from any of their rows.  So we need to forbid empties.
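
A hedged sketch of the constraint and the empty-range restriction (names
invented; exact error text may differ):

    CREATE EXTENSION IF NOT EXISTS btree_gist;

    CREATE TABLE booking (
        room   int,
        during tstzrange,
        PRIMARY KEY (room, during WITHOUT OVERLAPS)
    );

    -- an empty range would never overlap anything, so it is now rejected
    INSERT INTO booking VALUES (1, 'empty');  -- fails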

This all means that at the moment we can only support ranges and
multiranges for temporal PK/UQs, unlike the original patch (above).
Documentation and tests for this are added.  But this could
conceivably be extended by introducing some more general support for
the notion of "empty" for other types.

Author: Paul A. Jungwirth <pj@illuminatedcomputing.com>
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Reviewed-by: jian he <jian.universality@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/CA+renyUApHgSZF9-nd-a0+OPGharLQLO=mDHcY4_qQ0+noCUVg@mail.gmail.com
2024-09-17 11:29:30 +02:00
Tom Lane
d5622acb32 Replace usages of xmlXPathCompile() with xmlXPathCtxtCompile().
In existing releases of libxml2, xmlXPathCompile can be driven
to stack overflow because it fails to protect itself against
too-deeply-nested input.  While there is an upstream fix as of
yesterday, it will take years for that to propagate into all
shipping versions.  In the meantime, we can protect our own
usages basically for free by calling xmlXPathCtxtCompile instead.

(The actual bug is that libxml2 keeps its nesting counter in the
xmlXPathContext, and its parsing code was willing to just skip
counting nesting levels if it didn't have a context.  So if we supply
a context, all is well.  It seems odd actually that it works at all
to not supply a context, because this means that XPath parsing does
not have access to XML namespace info.  Apparently libxml2 never
checks namespaces until runtime?  Anyway, this seems like good
future-proofing even if its only immediate effect is to dodge a bug.)

Sadly, this hack only offers protection with libxml2 2.9.11 and newer.
Before that there are multiple similar problems, so if you are
processing untrusted XML it behooves you to get a newer version.
But we have some pretty old libxml2 in the buildfarm, so it seems
impractical to add a regression test to verify this fix.

Per bug #18617 from Jingzhou Fu.  Back-patch to all supported
versions.

Discussion: https://postgr.es/m/18617-1cee4d2ed1f4e7ae@postgresql.org
Discussion: https://gitlab.gnome.org/GNOME/libxml2/-/issues/799
2024-09-15 13:33:09 -04:00
Peter Eisentraut
433d8f40e9 Remove separate locale_is_c arguments
Since e9931bfb75, ctype_is_c is part of pg_locale_t.  Some functions
passed a pg_locale_t and a bool argument separately.  This can now be
combined into one argument.

Since some callers call MatchText() with locale 0, it is not immediately
obvious that this is all correct.  In fact, only callers that pass a
non-zero locale object to MatchText() end up checking
locale->ctype_is_c.  To make that flow a bit more
understandable, add the locale argument to MATCH_LOWER() and GETCHAR()
in like_match.c, instead of implicitly taking it from the outer scope.

Reviewed-by: Jeff Davis <pgsql@j-davis.com>
Discussion: https://www.postgresql.org/message-id/84d415fc-6780-419e-b16c-61a0ca819e2b@eisentraut.org
2024-09-13 16:10:52 +02:00
Jeff Davis
b0c30612c5 Simplify checks for deterministic collations.
Remove redundant checks for locale->collate_is_c now that we always
have a valid pg_locale_t.

Also, remove pg_locale_deterministic() wrapper, which is no longer
useful after commit e9931bfb75. Just check the field directly,
consistent with other fields in pg_locale_t.

Author: Andreas Karlsson
Discussion: https://postgr.es/m/60929555-4709-40a7-b136-bcb44cff5a3c@proxel.se
2024-09-12 13:35:56 -07:00
Jeff Davis
6a9fc11033 Remove redundant check for default collation.
The operative check is for a deterministic collation, so the check for
DEFAULT_COLLATION is redundant. Furthermore, it will be wrong if we
ever support a non-deterministic default collation.

Author: Andreas Karlsson
Discussion: https://postgr.es/m/60929555-4709-40a7-b136-bcb44cff5a3c@proxel.se
2024-09-12 13:35:49 -07:00
Tom Lane
cb599b9ddf Make jsonpath .string() be immutable for datetimes.
Discussion of commit ed055d249 revealed that we don't actually
want jsonpath's .string() method to depend on DateStyle, nor
TimeZone either, because the non-"_tz" jsonpath functions are
supposed to be immutable.  Potentially we could allow a TimeZone
dependency in the "_tz" variants, but it seems better to just
uniformly define this method as returning the same string that
jsonb text output would do.  That's easier to implement too,
saving a couple dozen lines.
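
A hedged illustration (exact output aside), showing that the result no
longer varies with DateStyle:

    SET datestyle = 'German, DMY';
    SELECT jsonb_path_query('"2024-09-12"', '$.datetime().string()');
    -- returns "2024-09-12", the same string jsonb text output would produce,
    -- rather than a DateStyle-dependent rendering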

Patch by me, per complaint from Peter Eisentraut.  Back-patch
to v17 where this feature came in (in 66ea94e8e).  Also
back-patch ed055d249 to provide test cases.

Discussion: https://postgr.es/m/5e8879d0-a3c8-4be2-950f-d83aa2af953a@eisentraut.org
2024-09-12 14:30:29 -04:00
Fujii Masao
4eada203a5 Add has_largeobject_privilege function.
This function checks whether a user has specific privileges on a large object,
identified by OID. The user can be specified by name or OID, or defaults
to the current user. If the specified large object doesn't exist,
the function returns NULL. It raises an error for a non-existent user name.
This behavior is basically consistent with other privilege inquiry functions
like has_table_privilege.
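
Illustrative usage (the role name and OIDs are made up):

    SELECT has_largeobject_privilege('alice', 42, 'SELECT');
    SELECT has_largeobject_privilege(42, 'UPDATE');        -- current user
    SELECT has_largeobject_privilege(4242424, 'SELECT');   -- NULL if no such large object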

Bump catalog version.

Author: Yugo Nagata
Reviewed-by: Fujii Masao
Discussion: https://postgr.es/m/20240702163444.ab586f6075e502eb84f11b1a@sranhm.sraoss.co.jp
2024-09-12 21:51:26 +09:00
Peter Eisentraut
23d0b48468 Remove hardcoded hash opclass function signature exceptions
hashvalidate(), which validates the signatures of support functions
for the hash AM, contained several hardcoded exceptions.  For example,
hash/date_ops support function 1 was hashint4(), which would
ordinarily fail validation because the function argument is int4, not
date.  But this works internally because int4 and date are of the same
size.  There are several more exceptions like this that happen to work
and were allowed historically but would now fail the function
signature validation.

This patch removes those exceptions by providing new support functions
that have the proper declared signatures.  They internally share most
of the code with the "wrong" functions they replace, so the behavior
is still the same.

With the exceptions gone, hashvalidate() is now simplified and relies
fully on check_amproc_signature().

hashvarlena() and hashvarlenaextended() are kept in pg_proc.dat
because some extensions currently use them to build hash functions for
their own types, and we need to keep exposing these functions as
"LANGUAGE internal" functions for that to continue to work.

Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://www.postgresql.org/message-id/flat/29c3b746-69e7-482a-b37c-dbbf7e5b009b@eisentraut.org
2024-09-12 12:57:43 +02:00
Fujii Masao
fefa76f70f Remove old RULE privilege completely.
The RULE privilege for tables was removed in v8.2, but for backward
compatibility, GRANT/REVOKE and privilege functions like
has_table_privilege continued to accept the RULE keyword without
any effect.

After discussions on pgsql-hackers, it was agreed that this compatibility
is no longer needed. Since it's been long enough since the deprecation,
we've decided to fully remove support for RULE privilege,
so GRANT/REVOKE and privilege functions will no longer accept it.
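
For example (object names are placeholders), these formerly-accepted
no-ops now raise errors:

    GRANT RULE ON TABLE mytable TO alice;
    SELECT has_table_privilege('alice', 'mytable', 'RULE');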

Author: Fujii Masao
Reviewed-by: Nathan Bossart
Discussion: https://postgr.es/m/976a3581-6939-457f-b947-fc3dc836c083@oss.nttdata.com
2024-09-12 19:33:44 +09:00
Amit Langote
e6c45d85dc SQL/JSON: Fix JSON_QUERY(... WITH CONDITIONAL WRAPPER)
Currently, when WITH CONDITIONAL WRAPPER is specified, array wrappers
are applied even to a single SQL/JSON item if it is a scalar JSON
value, but this behavior does not comply with the standard.

To fix, apply wrappers only when there are multiple SQL/JSON items
in the result.
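
Hedged examples of the corrected behavior as described above:

    -- a single scalar item is no longer wrapped
    SELECT JSON_QUERY(jsonb '{"a": 1}', '$.a' WITH CONDITIONAL WRAPPER);          -- 1
    -- multiple items still get the array wrapper
    SELECT JSON_QUERY(jsonb '{"a": [1, 2]}', '$.a[*]' WITH CONDITIONAL WRAPPER);  -- [1, 2]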

Reported-by: Peter Eisentraut <peter@eisentraut.org>
Author: Peter Eisentraut <peter@eisentraut.org>
Author: Amit Langote <amitlangote09@gmail.com>
Reviewed-by: Andrew Dunstan <andrew@dunslane.net>
Discussion: https://postgr.es/m/8022e067-818b-45d3-8fab-6e0d94d03626%40eisentraut.org
Backpatch-through: 17
2024-09-12 09:39:56 +09:00
Tomas Vondra
842265631d Fix unique key checks in JSON object constructors
When building a JSON object, the code builds a hash table of keys, to
allow checking if the keys are unique. The uniqueness check and adding
the new key happens in json_unique_check_key(), but this assumes the
pointer to the key remains valid.

Unfortunately, two places passed pointers to keys in a buffer, while
also appending more data (additional key/value pairs) to the buffer.
With enough data the buffer is resized by enlargeStringInfo(), which
calls repalloc(), invalidating the earlier key pointers.

Due to this the uniqueness check may fail with both false negatives and
false positives, producing JSON objects with duplicate keys or failing
to produce a perfectly valid JSON object.

This affects multiple functions that enforce uniqueness of keys, all
introduced in PG16 with the new SQL/JSON functionality:

- json_object_agg_unique / jsonb_object_agg_unique
- json_object / jsonb_objectagg

Existing regression tests did not detect the issue, simply because the
initial buffer size is 1024 and the objects were small enough not to
require the repalloc.

With a sufficiently large object, AddressSanitizer reported the access
to invalid memory immediately. So would valgrind, of course.

Fixed by copying the key into the hash table memory context, and adding
regression tests with enough data to repalloc the buffer. Backpatch to
16, where the functions were introduced.
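
A hedged sketch of the failure mode (the padded value just pushes the
buffer past its initial 1024 bytes; exact error wording may differ):

    SELECT json_object_agg_unique(k, repeat('x', 2000))
    FROM (VALUES ('a'), ('a')) AS t(k);
    -- must fail with a duplicate-key error, even after the buffer was repalloc'd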

Reported by Alexander Lakhin. Investigation and initial fix by Junwang
Zhao, with various improvements and tests by me.

Reported-by: Alexander Lakhin
Author: Junwang Zhao, Tomas Vondra
Backpatch-through: 16
Discussion: https://postgr.es/m/18598-3279ed972a2347c7@postgresql.org
Discussion: https://postgr.es/m/CAEG8a3JjH0ReJF2_O7-8LuEbO69BxPhYeXs95_x7+H9AMWF1gw@mail.gmail.com
2024-09-11 13:21:10 +02:00
Tom Lane
52c707483c Use a hash table to de-duplicate column names in ruleutils.c.
Commit 8004953b5 added a hash table to avoid O(N^2) cost in choosing
unique relation aliases while deparsing a view or rule.  It did
nothing about the similar O(N^2) (maybe worse) costs of choosing
unique column aliases within each RTE.  However, that's now
demonstrably a bottleneck when deparsing CHECK constraints for wide
tables, so let's use a similar hash table to handle those.

The extra cost of setting up the hash table will not be repaid unless
the table has many columns.  I've set this up so that we use the
brute-force method if there are fewer than 32 columns.  The exact cutoff is
not too critical, but this value seems good because it results in both
code paths getting exercised by existing regression-test cases.

Patch by me; thanks to David Rowley for review.

Discussion: https://postgr.es/m/2885468.1722291250@sss.pgh.pa.us
2024-09-10 16:49:09 -04:00
Tom Lane
bccca780ee Fix some whitespace issues in XMLSERIALIZE(... INDENT).
We must drop whitespace while parsing the input, else libxml2
will include "blank" nodes that interfere with the desired
indentation behavior.  The end result is that we didn't indent
nodes separated by whitespace.

Also, it seems that libxml2 may add a trailing newline when working
in DOCUMENT mode.  This is semantically insignificant, so strip it.
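
Illustration (a sketch; the exact indentation of the output is not shown):

    SELECT xmlserialize(DOCUMENT '<a> <b>1</b> <c>2</c> </a>' AS text INDENT);
    -- elements separated by whitespace are now indented, and any trailing
    -- newline added by libxml2 is stripped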

This is in the gray area between being a bug fix and a definition
change.  However, the INDENT option is still pretty new (since v16),
so I think we can get away with changing this in stable branches.
Hence, back-patch to v16.

Jim Jones

Discussion: https://postgr.es/m/872865a8-548b-48e1-bfcd-4e38e672c1e4@uni-muenster.de
2024-09-10 16:20:31 -04:00
Richard Guo
247dea89f7 Introduce an RTE for the grouping step
If there are subqueries in the grouping expressions, each of these
subqueries in the targetlist and HAVING clause is expanded into
distinct SubPlan nodes.  As a result, only one of these SubPlan nodes
would be converted to a reference to the grouping key column output by
the Agg node; the others would have to be evaluated afresh.  This is not
efficient, and with grouping sets it can produce wrong results in cases
where the expressions should go to NULL because they come from the wrong
grouping set.  Furthermore, during re-evaluation, these SubPlan nodes
might use nulled column values from grouping sets, which is not
correct.

This issue is not limited to subqueries.  For other types of
expressions that are part of grouping items, if they are transformed
into another form during preprocessing, they may fail to match lower
target items.  This can also lead to wrong results with grouping sets.

To fix this issue, we introduce a new kind of RTE representing the
output of the grouping step, with columns that are the Vars or
expressions being grouped on.  In the parser, we replace the grouping
expressions in the targetlist and HAVING clause with Vars referencing
this new RTE, so that the output of the parser directly expresses the
semantic requirement that the grouping expressions be gotten from the
grouping output rather than computed some other way.  In the planner,
we first preprocess all the columns of this new RTE and then replace
any Vars in the targetlist and HAVING clause that reference this new
RTE with the underlying grouping expressions, so that we will have
only one instance of a SubPlan node for each subquery contained in the
grouping expressions.

Bump catversion because this changes the querytree produced by the
parser.

Thanks to Tom Lane for the idea to invent a new kind of RTE.

Per reports from Geoff Winkless, Tobias Wendorff, Richard Guo from
various threads.

Author: Richard Guo
Reviewed-by: Ashutosh Bapat, Sutou Kouhei
Discussion: https://postgr.es/m/CAMbWs4_dp7e7oTwaiZeBX8+P1rXw4ThkZxh1QG81rhu9Z47VsQ@mail.gmail.com
2024-09-10 12:35:34 +09:00
Tom Lane
218527d014 Don't bother checking the result of SPI_connect[_ext] anymore.
SPI_connect/SPI_connect_ext have not returned any value other than
SPI_OK_CONNECT since commit 1833f1a1c in v10; any errors are thrown
via ereport.  (The most likely failure is out-of-memory, which has
always been thrown that way, so callers had better be prepared for
such errors.)  This makes it somewhat pointless to check these
functions' result, and some callers within our code haven't been
bothering; indeed, the only usage example within spi.sgml doesn't
bother.  So it's likely that the omission has propagated into
extensions too.

Hence, let's standardize on not checking, and document the return
value as historical, while not actually changing these functions'
behavior.  (The original proposal was to change their return type
to "void", but that would needlessly break extensions that are
conforming to the old practice.)  This saves a small amount of
boilerplate code in a lot of places.

Stepan Neretin

Discussion: https://postgr.es/m/CAMaYL5Z9Uk8cD9qGz9QaZ2UBJFOu7jFx5Mwbznz-1tBbPDQZow@mail.gmail.com
2024-09-09 12:18:34 -04:00
Jeff Davis
51edc4ca54 Remove lc_ctype_is_c().
Instead always fetch the locale and look at the ctype_is_c field.

hba.c relies on regexes working for the C locale without needing
catalog access, which worked before due to a special case for
C_COLLATION_OID in lc_ctype_is_c(). Move the special case to
pg_set_regex_collation() now that lc_ctype_is_c() is gone.

Author: Andreas Karlsson
Discussion: https://postgr.es/m/60929555-4709-40a7-b136-bcb44cff5a3c@proxel.se
2024-09-06 13:23:21 -07:00
Tom Lane
129a2f6679 Fix incorrect pg_stat_io output on 32-bit machines.
pg_stat_get_io() applied TimestampTzGetDatum twice to the
stat_reset_timestamp value.  On 64-bit builds that's harmless because
TimestampTzGetDatum is a no-op, but on 32-bit builds it results in
displaying garbage in the stats_reset column of the pg_stat_io view.

Bug dates to commit a9c70b46d which introduced pg_stat_io, so
back-patch to v16 where that came in.
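
The affected column, for reference (illustrative query):

    SELECT backend_type, object, context, stats_reset
    FROM pg_stat_io
    LIMIT 3;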

Bertrand Drouvot

Discussion: https://postgr.es/m/Ztrd+XcPTz1zorkg@ip-10-97-1-34.eu-west-3.compute.internal
2024-09-06 11:57:57 -04:00
Peter Eisentraut
9e43ab3dd7 Remove useless unconstify
Digging into the history, this was not necessary even when it was
added, though it might have been at some point before that.  In any case,
there is no use for it now.
2024-09-06 11:25:48 +02:00
Amit Langote
bbd4c058a8 SQL/JSON: Fix default ON ERROR behavior for JSON_TABLE
Use EMPTY ARRAY instead of EMPTY.

This change does not affect the runtime behavior of JSON_TABLE(),
which continues to return an empty relation ON ERROR. It only alters
whether the default ON ERROR behavior is shown in the deparsed output.

Reported-by: Jian He <jian.universality@gmail.com>
Discussion: https://postgr.es/m/CACJufxEo4sUjKCYtda0_qt9tazqqKPmF1cqhW9KBOUeJFqQd2g@mail.gmail.com
Backpatch-through: 17
2024-09-06 13:25:53 +09:00
Amit Langote
ee75a03f37 SQL/JSON: Fix JSON_TABLE() column deparsing
The deparsing code in get_json_expr_options() unnecessarily emitted
the default column-specific ON ERROR / EMPTY behavior when the
top-level ON ERROR behavior in JSON_TABLE was set to ERROR. Fix that
by not overriding the column-specific default, determined based on
the column's JsonExprOp in get_json_table_columns(), with
JSON_BEHAVIOR_ERROR when that is the top-level ON ERROR behavior.

Note that this only removes redundancy; the current deparsing output
is not incorrect, just redundant.

Reviewed-by: Jian He <jian.universality@gmail.com>
Discussion: https://postgr.es/m/CACJufxEo4sUjKCYtda0_qt9tazqqKPmF1cqhW9KBOUeJFqQd2g@mail.gmail.com
Backpatch-through: 17
2024-09-06 13:25:47 +09:00
Amit Langote
4d7e24e0f4 Revert recent SQL/JSON related commits
Reverts 68222851d5, 565caaa79a, and 3a97460970, because a few
buildfarm animals didn't like one or all of them.
2024-09-06 12:53:01 +09:00
Amit Langote
565caaa79a SQL/JSON: Fix default ON ERROR behavior for JSON_TABLE
Use EMPTY ARRAY instead of EMPTY.

This change does not affect the runtime behavior of JSON_TABLE(),
which continues to return an empty relation ON ERROR. It only alters
whether the default ON ERROR behavior is shown in the deparsed output.

Reported-by: Jian He <jian.universality@gmail.com>
Discussion: https://postgr.es/m/CACJufxEo4sUjKCYtda0_qt9tazqqKPmF1cqhW9KBOUeJFqQd2g@mail.gmail.com
Backpatch-through: 17
2024-09-06 10:14:01 +09:00
Amit Langote
68222851d5 SQL/JSON: Fix JSON_TABLE() column deparsing
The deparsing code in get_json_expr_options() unnecessarily emitted
the default column-specific ON ERROR / EMPTY behavior when the
top-level ON ERROR behavior in JSON_TABLE was set to ERROR. Fix that
by not overriding the column-specific default, determined based on
the column's JsonExprOp in get_json_table_columns(), with
JSON_BEHAVIOR_ERROR when that is the top-level ON ERROR behavior.

Note that this only removes redundancy; the current deparsing output
is not incorrect, just redundant.

Reviewed-by: Jian He <jian.universality@gmail.com>
Discussion: https://postgr.es/m/CACJufxEo4sUjKCYtda0_qt9tazqqKPmF1cqhW9KBOUeJFqQd2g@mail.gmail.com
Backpatch-through: 17
2024-09-06 10:13:57 +09:00
Jeff Davis
06421b0843 Remove lc_collate_is_c().
Instead just look up the collation and check collate_is_c field.

Author: Andreas Karlsson
Discussion: https://postgr.es/m/60929555-4709-40a7-b136-bcb44cff5a3c@proxel.se
2024-09-04 14:35:25 -07:00
Amit Kapila
6c2b5edecc Collect statistics about conflicts in logical replication.
This commit adds columns to the pg_stat_subscription_stats view to show the
number of times a particular conflict type has occurred during the
application of logical replication changes. The following columns are
added:

confl_insert_exists:
        Number of times a row insertion violated a NOT DEFERRABLE unique
        constraint.
confl_update_origin_differs:
        Number of times an update was performed on a row that was
        previously modified by another origin.
confl_update_exists:
        Number of times that the updated value of a row violates a
        NOT DEFERRABLE unique constraint.
confl_update_missing:
        Number of times that the tuple to be updated is missing.
confl_delete_origin_differs:
        Number of times a delete was performed on a row that was
        previously modified by another origin.
confl_delete_missing:
        Number of times that the tuple to be deleted is missing.

The update_origin_differs and delete_origin_differs conflicts can be
detected only when track_commit_timestamp is enabled.
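
Illustrative query over the new columns (a sketch; the *_origin_differs
counters depend on track_commit_timestamp, per the note above):

    SELECT subname,
           confl_insert_exists,
           confl_update_exists,
           confl_update_missing,
           confl_update_origin_differs,
           confl_delete_origin_differs,
           confl_delete_missing
    FROM pg_stat_subscription_stats;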

Author: Hou Zhijie
Reviewed-by: Shveta Malik, Peter Smith, Amit Kapila
Discussion: https://postgr.es/m/OS0PR01MB57160A07BD575773045FC214948F2@OS0PR01MB5716.jpnprd01.prod.outlook.com
2024-09-04 08:55:21 +05:30
Jeff Davis
12d3345c0d Remember last collation to speed up collation cache.
This optimization is to avoid a performance regression in an upcoming
patch that will remove lc_collate_is_c().

Discussion: https://postgr.es/m/96a559be83329bc66074a3925ebcfa8ceb16dfc5.camel@j-davis.com
Discussion: https://postgr.es/m/646f662e145ab38cff1c04d475f4448f53fc5042.camel@j-davis.com
Discussion: https://postgr.es/m/54565933-d82f-4d7c-8f47-288b1b570fd8@eisentraut.org
2024-09-03 16:30:03 -07:00
Michael Paquier
c7cd2d6ed0 Define PG_TBLSPC_DIR for path pg_tblspc/ in data folder
Similarly to 2065ddf5e3, this introduces a define for "pg_tblspc".
This makes the style more consistent with the existing PG_STAT_TMP_DIR,
for example.

There is a difference with the other cases with the introduction of
PG_TBLSPC_DIR_SLASH, required in two places for recovery and backups.

Author: Bertrand Drouvot
Reviewed-by: Ashutosh Bapat, Álvaro Herrera, Yugo Nagata, Michael
Paquier
Discussion: https://postgr.es/m/ZryVvjqS9SnV1GPP@ip-10-97-1-34.eu-west-3.compute.internal
2024-09-03 09:11:54 +09:00
Michael Paquier
c39afc38cf Define PG_LOGICAL_DIR for path pg_logical/ in data folder
This is similar to 2065ddf5e3, but this time for pg_logical/ itself
and its contents, like the paths for snapshots, mappings or origin
checkpoints.

Author: Bertrand Drouvot
Reviewed-by: Ashutosh Bapat, Yugo Nagata, Michael Paquier
Discussion: https://postgr.es/m/ZryVvjqS9SnV1GPP@ip-10-97-1-34.eu-west-3.compute.internal
2024-08-30 15:25:12 +09:00
Michael Paquier
2065ddf5e3 Define PG_REPLSLOT_DIR for path pg_replslot/ in data folder
This commit replaces most of the hardcoded values of "pg_replslot" by a
new PG_REPLSLOT_DIR #define.  This makes the style more consistent with
the existing PG_STAT_TMP_DIR, for example.  More places will follow a
similar change.

Author: Bertrand Drouvot
Reviewed-by: Ashutosh Bapat, Yugo Nagata, Michael Paquier
Discussion: https://postgr.es/m/ZryVvjqS9SnV1GPP@ip-10-97-1-34.eu-west-3.compute.internal
2024-08-30 10:42:21 +09:00
Tom Lane
43f2e7634d Fix mis-deparsing of ORDER BY lists when there is a name conflict.
If an ORDER BY item in SELECT is a bare identifier, the parser
first seeks it as an output column name of the SELECT (for SQL92
compatibility).  However, ruleutils.c is expecting the SQL99
interpretation where such a name is an input column name.  So it's
possible to produce an incorrect display of a view in the (admittedly
pretty ill-advised) case where some other column is renamed in the
SELECT output list to match an ORDER BY column.

This can be fixed by table-qualifying such names in the dumped
view text.  To avoid cluttering less-ill-advised queries, we'd
like to do so only when there's an actual name conflict.
That requires passing the current get_query_def call's resultDesc
parameter down to get_variable, so that it can determine what
the output column names are.  In hopes of reducing rather than
increasing notational clutter in ruleutils.c, I moved that value
into the deparse_context struct and removed it from the parameter
lists of get_query_def's other subroutines.

I made a few other cosmetic changes while at it:
* Likewise move the colNamesVisible parameter into deparse_context.
* Rename deparse_context's windowTList field to targetList,
since it's no longer used only in connection with WINDOW clauses.
* Replace the special_exprkind field with a bool inGroupBy,
since that was all it was being used for, and the apparent
flexibility of storing a ParseExprKind proved to be illusory.
(We need a separate varInOrderBy field to make this patch work.)
* Remove useless save/restore logic in get_select_query_def.

In principle, this bug is quite old.  However, it seems unreachable
before 1b4d280ea, because before that the presence of "new" and "old"
entries in a view's rangetable caused us to always table-qualify every
Var reference in dumped views.  Hence, back-patch to v16 where that
came in.

Per bug #18589 from Quynh Tran.

Discussion: https://postgr.es/m/18589-70091cb81db1a3f1@postgresql.org
2024-08-29 13:24:17 -04:00
Peter Eisentraut
edee0c621d Message style improvements 2024-08-29 14:43:34 +02:00
Dean Rasheed
7cac6307a4 Fix compiler warning in mul_var_short().
Some compilers (e.g., gcc before version 7) mistakenly think "carry"
might be used uninitialized.

Reported by Tom Lane, per various buildfarm members, e.g. arowana.
2024-08-26 11:00:20 +01:00
Alexander Korotkov
3890d90c15 Revert support for ALTER TABLE ... MERGE/SPLIT PARTITION(S) commands
This commit reverts 1adf16b8fb, 87c21bb941, and subsequent fixes and
improvements including df64c81ca9, c99ef1811a, 9dfcac8e15, 885742b9f8,
842c9b2705, fcf80c5d5f, 96c7381c4c, f4fc7cb54b, 60ae37a8bc, 259c96fa8f,
449cdcd486, 3ca43dbbb6, 2a679ae94e, 3a82c689fd, fbd4321fd5, d53a4286d7,
c086896625, 4e5d6c4091, 04158e7fa3.

The reason for reverting is security issues related to repeatable name lookups
(CVE-2014-0062).  Even though 04158e7fa3 solved part of the problem, there
are still remaining issues, which aren't feasible to even carefully analyze
before the RC deadline.

Reported-by: Noah Misch, Robert Haas
Discussion: https://postgr.es/m/20240808171351.a9.nmisch%40google.com
Backpatch-through: 17
2024-08-24 18:48:48 +03:00
Peter Eisentraut
a2bbc58f74 thread-safety: gmtime_r(), localtime_r()
Use gmtime_r() and localtime_r() instead of gmtime() and localtime(),
for thread-safety.

There are a few affected calls in libpq and ecpg's libpgtypes, which
are probably effectively bugs, because those libraries already claim
to be thread-safe.

There is one affected call in the backend.  Most of the backend
otherwise uses the custom functions pg_gmtime() and pg_localtime(),
which are implemented differently.

While we're here, change the call in the backend to gmtime*() instead
of localtime*(), since for that use time zone behavior is irrelevant,
and this side-steps any questions about when time zones are
initialized by localtime_r() vs localtime().

Portability: gmtime_r() and localtime_r() are in POSIX but are not
available on Windows.  Windows has functions gmtime_s() and
localtime_s() that can fulfill the same purpose, so we add some small
wrappers around them.  (Note that these *_s() functions are also
different from the *_s() functions in the bounds-checking extension of
C11.  We are not using those here.)

On MinGW, you can get the POSIX-style *_r() functions by defining
_POSIX_C_SOURCE appropriately before including <time.h>.  This leads
to a conflict at least in plpython because apparently _POSIX_C_SOURCE
gets defined in some header there, and then our replacement
definitions conflict with the system definitions.  To avoid that sort
of thing, we now always define _POSIX_C_SOURCE on MinGW and use the
POSIX-style functions here.

Reviewed-by: Stepan Neretin <sncfmgg@gmail.com>
Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
Reviewed-by: Thomas Munro <thomas.munro@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/eba1dc75-298e-4c46-8869-48ba8aad7d70@eisentraut.org
2024-08-23 07:43:04 +02:00
Jeff Davis
a839567784 Fix obsolete comments in varstr_cmp(). 2024-08-21 09:19:21 -07:00
Robert Haas
2b03cfeea4 Fix pgindent damage
Oversight in commit a95ff1fe2e
2024-08-21 09:58:11 -04:00
Jeff Davis
a95ff1fe2e Slightly refactor varstr_sortsupport() to improve readability.
Author: Andreas Karlsson
Discussion: https://postgr.es/m/69c2a864-846f-4309-bd5a-aaa1c34f9a11@proxel.se
2024-08-20 15:32:39 -07:00
Thomas Munro
2724ff381a Fix harmless LC_COLLATE[_MASK] confusion.
Commit ca051d8b10 called newlocale(LC_COLLATE, ...) instead of
newlocale(LC_COLLATE_MASK, ...), in code reached only on FreeBSD.  They
have the same value on that OS, explaining why it worked.  Fix.

Back-patch to 14, where ca051d8b10 landed.
2024-08-19 22:12:55 +12:00
Nathan Bossart
1d80d6b50e Further reduce dependence on -fwrapv semantics in jsonb.
Commit 108d2adb9e missed updating a few places in the jsonb code
that rely on signed integer wrapping for correctness.  These can
also be fixed by using pg_abs_s32() to negate a signed integer
(that is known to be negative) for comparison with an unsigned
integer.

Reported-by: Alexander Lakhin
Discussion: https://postgr.es/m/bfff906f-300d-81ea-83b7-f2c93845e7f2%40gmail.com
2024-08-16 15:06:40 -05:00
Tom Lane
6be39d77a7 Fix extraction of week and quarter fields from intervals.
"EXTRACT(WEEK FROM interval_value)" formerly threw an error.
Define it as "tm->tm_mday / 7".  (With C99 division semantics,
this gives consistent results for negative intervals.)

"EXTRACT(QUARTER FROM interval_value)" has been implemented
all along, but it formerly gave extremely strange results for
negative intervals.  Fix it so that the output for -N months
is the negative of the output for N months.
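
Hedged examples of the new behavior:

    SELECT EXTRACT(WEEK FROM INTERVAL '10 days');       -- 1 (formerly an error)
    SELECT EXTRACT(QUARTER FROM INTERVAL '5 months');   -- 2
    SELECT EXTRACT(QUARTER FROM INTERVAL '-5 months');  -- -2, the negation of the +5 months result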

Per bug #18348 from Michael Bondarenko and subsequent discussion.

Discussion: https://postgr.es/m/18348-b097a3587dfde8a4@postgresql.org
2024-08-16 12:35:53 -04:00
Nathan Bossart
108d2adb9e Remove dependence on -fwrapv semantics in jsonb.
This commit updates a couple of places in the jsonb code to no
longer rely on signed integer wrapping for correctness.  Like
commit 9e9a2b7031, this is intended to move us closer towards
removing -fwrapv, which may enable some compiler optimizations.
However, there is presently no plan to actually remove that
compiler option in the near future.

This commit makes use of the newly introduced pg_abs_s32() routine
to negate a signed integer (that is known to be negative) for
comparison with an unsigned integer.  In passing, change one use of
INT_MIN to the more portable PG_INT32_MIN.

Reported-by: Alexander Lakhin
Author: Joseph Koshakow
Reviewed-by: Jian He
Discussion: https://postgr.es/m/CAAvxfHdBPOyEGS7s%2Bxf4iaW0-cgiq25jpYdWBqQqvLtLe_t6tw%40mail.gmail.com
2024-08-16 11:24:44 -05:00