mirror of https://github.com/postgres/postgres.git synced 2025-12-22 17:42:17 +03:00
Commit Graph

8630 Commits

Author SHA1 Message Date
Michael Paquier
22e3b55805 Switch some system functions to use get_call_result_type()
This shaves some code by replacing the combinations of
CreateTemplateTupleDesc()/TupleDescInitEntry() hardcoding a mapping of
the attributes listed in pg_proc.dat by get_call_result_type() to build
the TupleDesc needed for the rows generated.

get_call_result_type() is more expensive than the former style, but this
removes some duplication with the lists of OUT parameters (pg_proc.dat
and the attributes hardcoded in these code paths).  This is applied to
functions that are not considered critical (i.e. functions that could be
called repeatedly for monitoring purposes).

Author: Bharath Rupireddy
Reviewed-by: Robert Haas, Álvaro Herrera, Tom Lane, Michael Paquier
Discussion: https://postgr.es/m/CALj2ACV23HW5HP5hFjd89FNS-z5X8r2jNXdMXcpN2BgTtKd87w@mail.gmail.com
2022-12-21 10:11:22 +09:00
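
A minimal sketch of the pattern the commit above adopts, as it might look in
a backend C function returning a composite row (the function name and column
values are hypothetical; this compiles only inside the server or an
extension):

    #include "postgres.h"
    #include "fmgr.h"
    #include "funcapi.h"
    #include "access/htup_details.h"

    PG_FUNCTION_INFO_V1(my_status_function);

    Datum
    my_status_function(PG_FUNCTION_ARGS)
    {
        TupleDesc   tupdesc;
        Datum       values[2];
        bool        nulls[2] = {false, false};
        HeapTuple   tuple;

        /* Build the TupleDesc from the OUT parameters declared in pg_proc,
         * instead of hardcoding CreateTemplateTupleDesc()/TupleDescInitEntry(). */
        if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
            elog(ERROR, "return type must be a row type");

        values[0] = Int64GetDatum(0);       /* placeholder column values */
        values[1] = BoolGetDatum(true);

        tuple = heap_form_tuple(tupdesc, values, nulls);
        PG_RETURN_DATUM(HeapTupleGetDatum(tuple));
    }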
Andrew Dunstan
8284cf5f74 Add copyright notices to meson files
Discussion: https://postgr.es/m/222b43a5-2fb3-2c1b-9cd0-375d376c8246@dunslane.net
2022-12-20 07:54:39 -05:00
David Rowley
3226f47282 Add enable_presorted_aggregate GUC
1349d279 added query planner support to allow more efficient execution of
aggregate functions which have an ORDER BY or a DISTINCT clause.  Prior to
that commit, the planner would only request that the lower planner produce
a plan with the order required for the GROUP BY clause and it would be
left up to nodeAgg.c to perform the final sort of records within each
group so that the aggregate transition functions were called in the
correct order.  Now that the planner requests the lower planner produce a
plan with the GROUP BY and the ORDER BY / DISTINCT aggregates in mind,
there is the possibility that the planner chooses a plan which could be
less efficient than what would have been produced before 1349d279.

While developing 1349d279, I had in mind that Incremental Sort would help
us in cases where an index exists only on the GROUP BY column(s).
Incremental Sort would just replace the implicit tuplesorts which are
being performed in nodeAgg.c.  However, because the planner has the
flexibility to instead choose a plan which just performs a full sort on
both the GROUP BY and ORDER BY / DISTINCT aggregate columns, there is
potential for the planner to make a bad choice.  The costing for
Incremental Sort is not perfect as it assumes an even distribution of rows
to sort within each sort group.

Here we add an escape hatch in the form of the enable_presorted_aggregate
GUC.  This will allow users to get the pre-PG16 behavior in cases where
they have no other means to convince the query planner to produce a plan
which only sorts on the GROUP BY column(s).

Discussion: https://postgr.es/m/CAApHDvr1Sm+g9hbv4REOVuvQKeDWXcKUAhmbK5K+dfun0s9CvA@mail.gmail.com
2022-12-20 22:28:58 +13:00
David Rowley
d21ded75fd Improve the performance of the slab memory allocator
Slab has traditionally been fairly slow when compared with the AllocSet or
Generation memory allocators.  Part of this slowness came from having to
write out an entire block when we allocate a new block in order to
populate the free list indexes within the block's memory.  Additional
slowness came from having to move a block onto another dlist each time we
palloc or pfree a chunk from it.

Here we optimize both of those cases and do a little bit extra to improve
the performance of the slab allocator.

Here, instead of writing out the free list indexes when allocating a new
block, we introduce the concept of "unused" chunks.  When a block is first
allocated all chunks are unused.  These chunks only make it onto the
free list when they are pfree'd.  When allocating new chunks on an
existing block, we have the choice of consuming a chunk from the free list
or an unused chunk.  When both exist, we opt to use one from the free
list, as these have already been used and their memory is more likely
to be cached by the CPU.

Here we also reduce the number of block lists from there being one for
every possible value of free chunks on a block to just having a small
fixed number of block lists.  We keep the 0th block list for completely
full blocks and anything else stores blocks for some range of free chunks
with fuller blocks appearing on lower block list array elements.  This
reduces how often we must move a block to another list when we allocate or
free chunks, but still allows us to prefer to put new chunks on fuller
blocks and perhaps allow blocks with fewer chunks to be free'd later
once all their remaining chunks have been pfree'd.

Additionally, we now store a list of "emptyblocks", which are blocks that
no longer contain any allocated chunks.  We now keep up to 10 of these
around to avoid having to thrash malloc/free when allocation patterns
continually cause blocks to become free of any allocated chunks only to
allocate more chunks again.  Only once we have 10 of these do we free
the block.  This does raise the high water mark for the total memory that
a slab context can consume.  It does not seem entirely unreasonable that
we might one day want to make this a property of SlabContext rather than a
compile-time constant.  Let's wait and see if there is any evidence to
support that this is required before doing it.

Author: Andres Freund, David Rowley
Tested-by: Tomas Vondra, John Naylor
Discussion: https://postgr.es/m/20210717194333.mr5io3zup3kxahfm@alap3.anarazel.de
2022-12-20 21:48:51 +13:00
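
The free-list-versus-unused-chunks idea above can be pictured with a
self-contained toy model (this is not slab.c; the layout and all names are
made up for illustration):

    #include <stdio.h>

    #define CHUNKS_PER_BLOCK 8

    typedef struct ToyBlock
    {
        int     chunk[CHUNKS_PER_BLOCK];    /* stand-in for the real chunks */
        int     freehead;                   /* first free chunk index, or -1 */
        int     firstunused;                /* chunks >= this were never handed out */
    } ToyBlock;

    static int
    toy_alloc(ToyBlock *b)
    {
        if (b->freehead >= 0)               /* prefer recycled chunks: likely cached */
        {
            int     idx = b->freehead;

            b->freehead = b->chunk[idx];    /* next-free index was stored in the chunk */
            return idx;
        }
        if (b->firstunused < CHUNKS_PER_BLOCK)
            return b->firstunused++;        /* hand out a never-used chunk lazily */
        return -1;                          /* block is full */
    }

    static void
    toy_free(ToyBlock *b, int idx)
    {
        b->chunk[idx] = b->freehead;        /* push onto the free list */
        b->freehead = idx;
    }

    int
    main(void)
    {
        ToyBlock    b = {.freehead = -1, .firstunused = 0};
        int         a, c, r1, r2;

        a = toy_alloc(&b);                  /* 0, from the never-used region */
        c = toy_alloc(&b);                  /* 1, from the never-used region */
        toy_free(&b, a);
        r1 = toy_alloc(&b);                 /* recycles chunk 0 from the free list */
        r2 = toy_alloc(&b);                 /* takes never-used chunk 2 */
        printf("got %d and %d, recycled %d, then fresh %d\n", a, c, r1, r2);
        return 0;
    }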
John Naylor
995a9fb14f Move variable increment to the end of the loop
This is less error prone and matches the placement of other code
in the file.

Justin Pryzby

Reviewed by Tom Lane
Discussion: https://www.postgresql.org/message-id/20221123172436.GJ11463@telsasoft.com
2022-12-20 14:13:14 +07:00
Robert Haas
10ea0f924a Expose some information about backend subxact status.
A new function pg_stat_get_backend_subxact() can be used to get
information about the number of subtransactions in the cache of
a particular backend and whether that cache has overflowed. This
can be useful for tracking down performance problems that can
result from overflowed snapshots.

Dilip Kumar, reviewed by Zhihong Yu, Nikolay Samokhvalov,
Justin Pryzby, Nathan Bossart, Ashutosh Sharma, Julien
Rouhaud. Additional design comments from Andres Freund,
Tom Lane, Bruce Momjian, and David G. Johnston.

Discussion: http://postgr.es/m/CAFiTN-ut0uwkRJDQJeDPXpVyTWD46m3gt3JDToE02hTfONEN=Q@mail.gmail.com
2022-12-19 14:43:09 -05:00
Tom Lane
c4939f1215 Clean up dubious error handling in wellformed_xml().
This ancient bit of code was summarily trapping any ereport longjmp
whatsoever and assuming that it must represent an invalid-XML report.
It's not really appropriate to handle OOM-like situations that way:
maybe the input is valid or maybe not, but we couldn't find out.
And it'd be a seriously bad idea to ignore, say, a query cancel
error that way.  (Perhaps that can't happen because there is no
CHECK_FOR_INTERRUPTS anywhere within xml_parse, but even if that's
true today it's obviously a very fragile assumption.)

But in the wake of the previous commit, we can drop the PG_TRY
here altogether, and use the soft error mechanism to catch only
the kinds of errors that are legitimate to treat as invalid-XML.

(This is our first use of the soft error mechanism for something
not directly related to a datatype input function.  It won't be
the last.)

xml_is_document can be converted in the same way.  That one is
not actively broken, because it was checking specifically for
ERRCODE_INVALID_XML_DOCUMENT rather than trapping everything;
but the code is still shorter and probably faster this way.

Discussion: https://postgr.es/m/3564577.1671142683@sss.pgh.pa.us
2022-12-16 11:10:40 -05:00
Tom Lane
37bef842f5 Convert xml_in to report errors softly.
The key idea here is that xml_parse must distinguish hard errors
from soft errors.  We want to throw a hard error for libxml
initialization failures: those might be out-of-memory, or something
else, but in any case they are not the fault of the input string.
If we get to the point of parsing the input, and something goes
wrong, we can fairly consider that to mean bad input.

One thing that arguably does mean bad input, but I didn't trouble
to handle softly, is encoding conversion failure while converting
the server encoding to UTF8.  This might be something to improve
later, but it seems like a pretty low-probability scenario.

Discussion: https://postgr.es/m/3564577.1671142683@sss.pgh.pa.us
2022-12-16 11:10:40 -05:00
Tom Lane
d35a1af468 Convert range_in and multirange_in to report errors softly.
This is mostly straightforward, except that if the range type
has a canonical function, that might throw an error during range
input.  (Such errors probably only occur for edge cases: in the
in-core canonical functions, it happens only if a bound has the
maximum valid value for the underlying type.)  Hence, this patch
extends the soft-error regime to allow canonical functions to
return errors softly as well.  Extensions implementing range
canonical functions will need modification anyway because of the
API change for range_serialize(); while at it, they might want
to do something similar to what's been done here in the in-core
canonical functions.

Discussion: https://postgr.es/m/3284599.1671075185@sss.pgh.pa.us
2022-12-15 12:18:36 -05:00
Peter Eisentraut
75f49221c2 Static assertions cleanup
Because we added StaticAssertStmt() first before StaticAssertDecl(),
some uses as well as the instructions in c.h are now a bit backwards
from the "native" way static assertions are meant to be used in C.
This updates the guidance and moves some static assertions to better
places.

Specifically, since the addition of StaticAssertDecl(), we can put
static assertions at the file level.  This moves a number of static
assertions out of function bodies, where they might have been stuck
out of necessity, to perhaps better places at the file level or in
header files.

Also, when the static assertion appears in a position where a
declaration is allowed, then using StaticAssertDecl() is more native
than StaticAssertStmt().

Reviewed-by: John Naylor <john.naylor@enterprisedb.com>
Discussion: https://www.postgresql.org/message-id/flat/941a04e7-dd6f-c0e4-8cdf-a33b3338cbda%40enterprisedb.com
2022-12-15 10:10:32 +01:00
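
For reference, a small sketch of the placement guidance described above,
using the c.h macros (the conditions and messages are arbitrary examples):

    #include "postgres.h"

    /* File level: a declaration is allowed here, so StaticAssertDecl() is
     * the "native" form since its addition. */
    StaticAssertDecl(sizeof(int64) == 8, "int64 must be 8 bytes wide");

    static void
    some_function(void)
    {
        /* Inside a function body, StaticAssertStmt() is used as a statement. */
        StaticAssertStmt(sizeof(int32) == 4, "int32 must be 4 bytes wide");
    }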
Tom Lane
3b9d2deb67 Convert a few more datatype input functions to report errors softly.
Convert the remaining string-category input functions
(bpcharin, varcharin, byteain) to the new style.

Discussion: https://postgr.es/m/3038346.1671060258@sss.pgh.pa.us
2022-12-14 19:42:05 -05:00
Tom Lane
90161dad4d Convert a few more datatype input functions to report errors softly.
Convert cash_in and uuid_in to the new style.

Amul Sul, minor mods by me

Discussion: https://postgr.es/m/CAAJ_b97KeDWUdpTKGOaFYPv0OicjOu6EW+QYWj-Ywrgj_aEy1g@mail.gmail.com
2022-12-14 18:03:11 -05:00
Tom Lane
47f3f97fcd Convert a few more datatype input functions to report errors softly.
Convert assorted internal-ish datatypes, namely aclitemin,
int2vectorin, oidin, oidvectorin, pg_lsn_in, pg_snapshot_in,
and tidin to the new style.

(Some others you might expect to find in this group, such as
cidin and xidin, need no changes because they never throw
errors at all.  That seems a little cheesy ... but it is not in
the charter of this patch series to add new error conditions.)

Amul Sul, minor mods by me

Discussion: https://postgr.es/m/CAAJ_b97KeDWUdpTKGOaFYPv0OicjOu6EW+QYWj-Ywrgj_aEy1g@mail.gmail.com
2022-12-14 17:50:24 -05:00
Tom Lane
332741e739 Convert the geometric input functions to report errors softly.
Convert box_in, circle_in, line_in, lseg_in, path_in, point_in,
and poly_in to the new style.

line_in still throws hard errors for overflows/underflows that can occur
when the input is specified as two points rather than in the canonical
"Ax + By + C = 0" style.  I'm not too concerned about that: it won't be
reached in normal dump/restore cases, and it's fairly debatable that
such conversion should ever have been made part of a type input function
in the first place.  But in any case, I don't want to extend the soft
error conventions into float.h without more discussion than this patch
has had.

Amul Sul, minor mods by me

Discussion: https://postgr.es/m/CAAJ_b97KeDWUdpTKGOaFYPv0OicjOu6EW+QYWj-Ywrgj_aEy1g@mail.gmail.com
2022-12-14 16:10:20 -05:00
Tom Lane
17407a8eaa Convert a few more datatype input functions to report errors softly.
Convert bit_in, varbit_in, inet_in, cidr_in, macaddr_in, and
macaddr8_in to the new style.

Amul Sul, minor mods by me

Discussion: https://postgr.es/m/CAAJ_b97KeDWUdpTKGOaFYPv0OicjOu6EW+QYWj-Ywrgj_aEy1g@mail.gmail.com
2022-12-14 13:22:08 -05:00
Peter Eisentraut
b18c2decd7 Rearrange some static assertions for consistency
Put lengthof first.

Reported-by: Peter Smith <smithpb2250@gmail.com>
Discussion: https://www.postgresql.org/message-id/CAHut+PsUDMySVRuRc=h+P5N3+=TGvj4W_mi32XXg9dt4o-BXbA@mail.gmail.com
2022-12-14 16:08:13 +01:00
Peter Eisentraut
6fcda9aba8 Non-decimal integer literals
Add support for hexadecimal, octal, and binary integer literals:

    0x42F
    0o273
    0b100101

per SQL:202x draft.

This adds support in the lexer as well as in the integer type input
functions.

Reviewed-by: John Naylor <john.naylor@enterprisedb.com>
Reviewed-by: Zhihong Yu <zyu@yugabyte.com>
Reviewed-by: David Rowley <dgrowleyml@gmail.com>
Reviewed-by: Dean Rasheed <dean.a.rasheed@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/b239564c-cad0-b23e-c57e-166d883cb97d@enterprisedb.com
2022-12-14 06:17:07 +01:00
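
A self-contained sketch of accepting these prefixes on the input-function
side (this is not the actual lexer or pg_strtointNN code; error and overflow
handling are omitted):

    #include <stdio.h>
    #include <stdlib.h>

    /* Recognize the 0x/0o/0b prefixes added by the commit above and let
     * strtol() do the per-base digit accumulation. */
    static long
    parse_integer_literal(const char *s)
    {
        int     base = 10;

        if (s[0] == '0' && (s[1] == 'x' || s[1] == 'X'))
        {
            base = 16;
            s += 2;
        }
        else if (s[0] == '0' && (s[1] == 'o' || s[1] == 'O'))
        {
            base = 8;
            s += 2;
        }
        else if (s[0] == '0' && (s[1] == 'b' || s[1] == 'B'))
        {
            base = 2;
            s += 2;
        }
        return strtol(s, NULL, base);
    }

    int
    main(void)
    {
        printf("%ld %ld %ld\n",
               parse_integer_literal("0x42F"),      /* 1071 */
               parse_integer_literal("0o273"),      /* 187 */
               parse_integer_literal("0b100101"));  /* 37 */
        return 0;
    }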
Jeff Davis
60684dd834 Add grantable MAINTAIN privilege and pg_maintain role.
Allows VACUUM, ANALYZE, REINDEX, REFRESH MATERIALIZED VIEW, CLUSTER,
and LOCK TABLE.

Effectively reverts 4441fc704d. Instead of creating separate
privileges for VACUUM, ANALYZE, and other maintenance commands, group
them together under a single MAINTAIN privilege.

Author: Nathan Bossart
Discussion: https://postgr.es/m/20221212210136.GA449764@nathanxps13
Discussion: https://postgr.es/m/45224.1670476523@sss.pgh.pa.us
2022-12-13 17:33:28 -08:00
Tom Lane
b0feda79fd Fix jsonb subscripting to cope with toasted subscript values.
jsonb_get_element() was incautious enough to use VARDATA() and
VARSIZE() directly on an arbitrary text Datum.  That of course
fails if the Datum is short-header, compressed, or out-of-line.
The typical result would be failing to match any element of a
jsonb object, though matching the wrong one seems possible as well.

setPathObject() was slightly brighter, in that it used VARDATA_ANY
and VARSIZE_ANY_EXHDR, but that only kept it out of trouble for
short-header Datums.  push_path() had the same issue.  This could
result in faulty subscripted insertions, though keys long enough to
cause a problem are likely rare in the wild.

Having seen these, I looked around for unsafe usages in the rest
of the adt/json* files.  There are a couple of places where it's not
immediately obvious that the Datum can't be compressed or out-of-line,
so I added pg_detoast_datum_packed() to cope if it is.  Also, remove
some other usages of VARDATA/VARSIZE on Datums we just extracted from
a text array.  Those aren't actively broken, but they will become so
if we ever start allowing short-header array elements, which does not
seem like a terribly unreasonable thing to do.  In any case they are
not great coding examples, and they could also do with comments
pointing out that we're assuming we don't need pg_detoast_datum_packed.

Per report from exe-dealer@yandex.ru.  Patch by me, but thanks to
David Johnston for initial investigation.  Back-patch to v14 where
jsonb subscripting was introduced.

Discussion: https://postgr.es/m/205321670615953@mail.yandex.ru
2022-12-12 16:17:54 -05:00
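
The safe coding pattern the commit installs looks roughly like this in
backend code (a sketch only; 'd' stands for any text Datum, for example one
pulled out of a subscript array):

    #include "postgres.h"
    #include "fmgr.h"

    static void
    use_text_datum(Datum d)
    {
        /* Detoast first (packed form suffices), then use the _ANY macros;
         * plain VARDATA()/VARSIZE() assume a full 4-byte header and an
         * untoasted value. */
        text   *t = DatumGetTextPP(d);
        char   *str = VARDATA_ANY(t);
        int     len = VARSIZE_ANY_EXHDR(t);

        /* ... work with str/len here ... */
        (void) str;
        (void) len;
    }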
Tom Lane
b8c0ffbd2c Convert domain_in to report errors softly.
This is straightforward as far as it goes.  However, it does not
attempt to trap errors occurring during the execution of domain
CHECK constraints.  Since those are general user-defined
expressions, the only way to do that would involve starting up a
subtransaction for each check.  Of course the entire point of
the soft-errors feature is to not need subtransactions, so that
would be self-defeating.  For now, we'll rely on the assumption
that domain checks are written to avoid throwing errors.

Discussion: https://postgr.es/m/1181028.1670635727@sss.pgh.pa.us
2022-12-11 12:56:54 -05:00
Tom Lane
c60c9badba Convert json_in and jsonb_in to report errors softly.
This requires a bit of further infrastructure-extension to allow
trapping errors reported by numeric_in and pg_unicode_to_server,
but otherwise it's pretty straightforward.

In the case of jsonb_in, we are only capturing errors reported
during the initial "parse" phase.  The value-construction phase
(JsonbValueToJsonb) can also throw errors if assorted implementation
limits are exceeded.  We should improve that, but it seems like a
separable project.

Andrew Dunstan and Tom Lane

Discussion: https://postgr.es/m/3bac9841-fe07-713d-fa42-606c225567d6@dunslane.net
2022-12-11 11:28:15 -05:00
Tom Lane
50428a301d Change JsonSemAction to allow non-throw error reporting.
Formerly, semantic action functions for the JSON parser returned void,
so that there was no way for them to affect the parser's behavior.
That means in particular that they can't force an error exit except by
longjmp'ing.  That won't do in the context of our project to make input
functions return errors softly.  Hence, change them to return the same
JsonParseErrorType enum value as the parser itself uses.  If an action
function returns anything besides JSON_SUCCESS, the parse is abandoned
and that error code is returned.

Action functions can thus easily return the same error conditions that
the parser already knows about.  As an escape hatch for expansion, also
invent a code JSON_SEM_ACTION_FAILED that the core parser does not know
the exact meaning of.  When returning this code, an action function
must use some out-of-band mechanism for reporting the error details.

This commit simply makes the API change and causes all the existing
action functions to return JSON_SUCCESS, so that there is no actual
change in behavior here.  This is long enough and boring enough that
it seemed best to commit it separately from the changes that make
real use of the new mechanism.

In passing, remove a duplicate assignment of
transform_string_values_scalar.

Discussion: https://postgr.es/m/1436686.1670701118@sss.pgh.pa.us
2022-12-11 10:39:05 -05:00
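
A hedged sketch of what an action function can look like under the new API
(the callback shape and token-type name are as recalled from
common/jsonapi.h and should be treated as assumptions):

    #include "postgres.h"
    #include "common/jsonapi.h"

    typedef struct MySemState
    {
        const char *errdetail;      /* hypothetical out-of-band error details */
    } MySemState;

    static JsonParseErrorType
    my_scalar_action(void *state, char *token, JsonTokenType tokentype)
    {
        MySemState *sstate = (MySemState *) state;

        if (tokentype == JSON_TOKEN_NUMBER && token[0] == '-')
        {
            /* Abandon the parse without longjmp'ing; details travel
             * out-of-band through the semstate, per the escape hatch. */
            sstate->errdetail = "negative numbers are not accepted here";
            return JSON_SEM_ACTION_FAILED;
        }
        return JSON_SUCCESS;
    }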
Tom Lane
d02ef65bce Standardize error reports in unimplemented I/O functions.
We chose a specific wording of the not-implemented errors for
pseudotype I/O functions and other cases where there's little
value in implementing input and/or output.  gtsvectorin never
got that memo though, nor did most of contrib.  Make these all
fall in line, mostly because I'm a neatnik but also to remove
unnecessary translatable strings.

gbtreekey_in needs a bit of extra love since it supports
multiple SQL types.  Sadly, gbtreekey_out doesn't have the
ability to do that, but I think it's unreachable anyway.

Noted while surveying datatype input functions to see what we
have left to fix.
2022-12-10 18:26:43 -05:00
Tom Lane
e730718072 Use the macro, not handwritten code, to construct anymultirange_in().
Apparently anymultirange_in was written before we converted all
these pseudotype input functions to use a common macro, and it didn't
get fixed before committing.  Sloppy merging probably explains its
unintuitive ordering, too, so rearrange.

Noted while surveying datatype input functions to see what we
have left to fix.  I'm inclined to leave the pseudotypes as
throwing hard errors, because it's difficult to see a reason why
anyone would need something else.  But in any case, if we want
to change that, we shouldn't have to change multiple copies of
the code.
2022-12-10 17:22:16 -05:00
Michael Paquier
66dcb09246 Fix macro definitions in pgstatfuncs.c
Buildfarm member wrasse has been complaining about empty declarations
as an effect of 8018ffb and 83a1a1b due to extra semicolons.

While on it, also remove the trailing backslash of the macro definitions,
which caused more lines than necessary to be eaten, per comment from
Tom Lane.

Reported-by: Tom Lane, and buildfarm member wrasse
Author: Nathan Bossart, Michael Paquier
Discussion: https://postgr.es/m/1188769.1670640236@sss.pgh.pa.us
2022-12-10 13:28:02 +09:00
Tom Lane
4dd687502d Restructure soft-error handling in formatting.c.
Replace the error trapping scheme introduced in 5bc450629 with our
shiny new errsave/ereturn mechanism.  This doesn't have any real
functional impact (although I think that the new coding is able
to report a few more errors softly than v15 did).  And I doubt
there's any measurable performance difference either.  But this
gets rid of an ad-hoc, one-of-a-kind design in favor of a mechanism
that will be widely used going forward, so it should be a net win
for code readability.

Discussion: https://postgr.es/m/3bbbb0df-7382-bf87-9737-340ba096e034@postgrespro.ru
2022-12-09 20:15:56 -05:00
Tom Lane
c60488b474 Convert datetime input functions to use "soft" error reporting.
This patch converts the input functions for date, time, timetz,
timestamp, timestamptz, and interval to the new soft-error style.
There's some related stuff in formatting.c that remains to be
cleaned up, but that seems like a separable project.

Discussion: https://postgr.es/m/3bbbb0df-7382-bf87-9737-340ba096e034@postgrespro.ru
2022-12-09 16:07:49 -05:00
Tom Lane
2661469d86 Allow DateTimeParseError to handle bad-timezone error messages.
Pay down some ancient technical debt (dating to commit 022fd9966):
fix a couple of places in datetime parsing that were throwing
ereport's immediately instead of returning a DTERR code that could be
interpreted by DateTimeParseError.  The reason for that was that there
was no mechanism for passing any auxiliary data (such as a zone name)
to DateTimeParseError, and these errors seemed to really need it.
Up to now it didn't matter that much just where the error got thrown,
but now we'd like to have a hard policy that datetime parse errors
get thrown from just the one place.

Hence, invent a "DateTimeErrorExtra" struct that can be used to
carry any extra values needed for specific DTERR codes.  Perhaps
in the future somebody will be motivated to use this to improve
the specificity of other DateTimeParseError messages, but for now
just deal with the timezone-error cases.

This is on the way to making the datetime input functions report
parse errors softly; but it's really an independent change, so
commit separately.

Discussion: https://postgr.es/m/3bbbb0df-7382-bf87-9737-340ba096e034@postgrespro.ru
2022-12-09 13:30:47 -05:00
Tom Lane
bad5116957 Const-ify a couple of datetime parsing subroutines.
More could be done in this line, but I just grabbed some low-hanging
fruit.  Principal objective was to remove the need for several ugly
unconstify() usages in formatting.c.
2022-12-09 10:43:45 -05:00
Tom Lane
ccff2d20ed Convert a few datatype input functions to use "soft" error reporting.
This patch converts the input functions for bool, int2, int4, int8,
float4, float8, numeric, and contrib/cube to the new soft-error style.
array_in and record_in are also converted.  There's lots more to do,
but this is enough to provide proof-of-concept that the soft-error
API is usable, as well as reference examples for how to convert
input functions.

This patch is mostly by me, but it owes very substantial debt to
earlier work by Nikita Glukhov, Andrew Dunstan, and Amul Sul.
Thanks to Andres Freund for review.

Discussion: https://postgr.es/m/3bbbb0df-7382-bf87-9737-340ba096e034@postgrespro.ru
2022-12-09 10:14:53 -05:00
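
A sketch of the conversion pattern used for these input functions (the type,
its parsing, and the message text are hypothetical; the escontext/ereturn
usage follows the new soft-error API):

    #include "postgres.h"
    #include "fmgr.h"
    #include "nodes/nodes.h"

    PG_FUNCTION_INFO_V1(mytype_in);

    Datum
    mytype_in(PG_FUNCTION_ARGS)
    {
        char   *str = PG_GETARG_CSTRING(0);
        Node   *escontext = fcinfo->context;
        int     value;

        if (sscanf(str, "%d", &value) != 1)
            ereturn(escontext, (Datum) 0,
                    (errcode(ERRCODE_INVALID_TEXT_REPRESENTATION),
                     errmsg("invalid input syntax for type %s: \"%s\"",
                            "mytype", str)));

        /* Soft reporting only covers bad input; internal failures still use
         * ereport(ERROR) as before. */
        PG_RETURN_INT32(value);
    }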
Tom Lane
1939d26282 Add test scaffolding for soft error reporting from input functions.
pg_input_is_valid() returns boolean, while pg_input_error_message()
returns the primary error message if the input is bad, or NULL
if the input is OK.  The main reason for having two functions is
so that we can test both the details-wanted and the no-details-wanted
code paths.

Although these are primarily designed with testing in mind,
it could well be that they'll be useful to end users as well.

This patch is mostly by me, but it owes very substantial debt to
earlier work by Nikita Glukhov, Andrew Dunstan, and Amul Sul.
Thanks to Andres Freund for review.

Discussion: https://postgr.es/m/3bbbb0df-7382-bf87-9737-340ba096e034@postgrespro.ru
2022-12-09 10:08:44 -05:00
Tom Lane
d9f7f5d32f Create infrastructure for "soft" error reporting.
Postgres' standard mechanism for reporting errors (ereport() or elog())
is used for all sorts of error conditions.  This means that throwing
an exception via ereport(ERROR) requires an expensive transaction or
subtransaction abort and cleanup, since the exception catcher dare not
make many assumptions about what has gone wrong.  There are situations
where we would rather have a lighter-weight mechanism for dealing
with errors that are known to be safe to recover from without a full
transaction cleanup.  This commit creates infrastructure to let us
adapt existing error-reporting code for that purpose.  See the
included documentation changes for details.  Follow-on commits will
provide test code and usage examples.

The near-term plan is to convert most if not all datatype input
functions to report invalid input "softly".  This will enable
implementing some SQL/JSON features cleanly and without the cost
of subtransactions, and it will also allow creating COPY options
to deal with bad input without cancelling the whole COPY.

This patch is mostly by me, but it owes very substantial debt to
earlier work by Nikita Glukhov, Andrew Dunstan, and Amul Sul.
Thanks also to Andres Freund for review.

Discussion: https://postgr.es/m/3bbbb0df-7382-bf87-9737-340ba096e034@postgrespro.ru
2022-12-09 09:58:38 -05:00
Alexander Korotkov
096dd80f3c Add USER SET parameter values for pg_db_role_setting
The USER SET flag specifies that the variable should be set on behalf of an
ordinary role.  That lets ordinary roles set placeholder variables whose
permission requirements are not yet known.  Such a value wouldn't be used if
the variable finally turns out to require superuser privileges.

The new flags are stored in the pg_db_role_setting.setuser array.  Catversion
is bumped.

This commit is inspired by the previous work by Steve Chavez.

Discussion: https://postgr.es/m/CAPpHfdsLd6E--epnGqXENqLP6dLwuNZrPMcNYb3wJ87WR7UBOQ%40mail.gmail.com
Author: Alexander Korotkov, Steve Chavez
Reviewed-by: Pavel Borisov, Steve Chavez
2022-12-09 13:12:20 +03:00
Peter Eisentraut
07c29ca7fe Remove unnecessary casts
Some code carefully cast all data buffer arguments for BufFileWrite()
and BufFileRead() to void *, even though the arguments are already
void * (and AFAICT were never anything else).  Remove this unnecessary
clutter.

Discussion: https://www.postgresql.org/message-id/flat/11dda853-bb5b-59ba-a746-e168b1ce4bdb%40enterprisedb.com
2022-12-08 08:58:15 +01:00
Tom Lane
8305629afe Minor code refactoring in elog.c (no functional change).
Combine some duplicated code stanzas by creating small functions.
Most of these duplications arose at a time when I wouldn't have
trusted C compilers to auto-inline small functions intelligently,
but they're probably poor practice now.  Similarly split out some
bits that aren't actually duplicative as the code stands, but would
become so after an upcoming patch to add another error-handling
code path.

Take the opportunity to add some lengthier comments about what
we're doing here, too.  Re-order one function that seemed not
very well-placed.

Patch by me, per suggestions from Andres Freund.

Discussion: https://postgr.es/m/3bbbb0df-7382-bf87-9737-340ba096e034@postgrespro.ru
2022-12-07 14:39:25 -05:00
Michael Paquier
8018ffbf58 Generate pg_stat_get*() functions for databases using macros
The same code pattern is repeated 21 times for int64 counters (0 for
missing entry) and 5 times for doubles (0 for missing entry) on database
entries.  This code is switched to use macros for the basic code
instead, shaving a few hundred lines of originally-duplicated code
patterns.  The function names remain the same, but some fields of
PgStat_StatDBEntry have to be renamed to cope with the new style.

This is in the same spirit as 83a1a1b.

Author: Michael Paquier
Reviewed-by: Nathan Bossart, Bertrand Drouvot
Discussion: https://postgr.es/m/Y46stlxQ2LQE20Na@paquier.xyz
2022-12-07 09:11:48 +09:00
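
The technique itself can be shown with a self-contained toy (this is not the
actual macro or PgStat_StatDBEntry; struct and names are made up):

    #include <stdio.h>

    typedef struct ToyDBEntry
    {
        long long   xact_commit;
        long long   xact_rollback;
        long long   blks_read;
    } ToyDBEntry;

    /* One macro expands to one accessor per counter, returning 0 when the
     * entry is missing, instead of hand-writing each near-identical
     * function. */
    #define DEFINE_INT64_GETTER(stat)                           \
    static long long                                            \
    toy_get_##stat(const ToyDBEntry *entry)                     \
    {                                                           \
        return entry ? entry->stat : 0;                         \
    }

    DEFINE_INT64_GETTER(xact_commit)
    DEFINE_INT64_GETTER(xact_rollback)
    DEFINE_INT64_GETTER(blks_read)

    int
    main(void)
    {
        ToyDBEntry  e = {42, 7, 1234};

        printf("%lld %lld %lld %lld\n",
               toy_get_xact_commit(&e), toy_get_xact_rollback(&e),
               toy_get_blks_read(&e), toy_get_blks_read(NULL));
        return 0;
    }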
Alvaro Herrera
a61b1f7482 Rework query relation permission checking
Currently, information about the permissions to be checked on relations
mentioned in a query is stored in their range table entries.  So the
executor must scan the entire range table looking for relations that
need to have permissions checked.  This can make the permission checking
part of the executor initialization needlessly expensive when many
inheritance children are present in the range table.  While the
permissions need not be checked on the individual child relations, the
executor still must visit every range table entry to filter them out.

This commit moves the permission checking information out of the range
table entries into a new plan node called RTEPermissionInfo.  Every
top-level (inheritance "root") RTE_RELATION entry in the range table
gets one and a list of those is maintained alongside the range table.
This new list is initialized by the parser when initializing the range
table.  The rewriter can add more entries to it as rules/views are
expanded.  Finally, the planner combines the lists of the individual
subqueries into one flat list that is passed to the executor for
checking.

To make it quick to find the RTEPermissionInfo entry belonging to a
given relation, RangeTblEntry gets a new Index field 'perminfoindex'
that stores the corresponding RTEPermissionInfo's index in the query's
list of the latter.

ExecutorCheckPerms_hook has gained another List * argument; the
signature is now:
typedef bool (*ExecutorCheckPerms_hook_type) (List *rangeTable,
					      List *rtePermInfos,
					      bool ereport_on_violation);
The first argument is no longer used by any in-core uses of the hook,
but we leave it in place because there may be other implementations that
do.  Implementations should likely scan the rtePermInfos list to
determine which operations to allow or deny.

Author: Amit Langote <amitlangote09@gmail.com>
Discussion: https://postgr.es/m/CA+HiwqGjJDmUhDSfv-U2qhKJjt9ST7Xh9JXC_irsAQ1TAUsJYg@mail.gmail.com
2022-12-06 16:09:24 +01:00
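
For extension authors, an implementation of the new hook signature might
look roughly like this (a sketch; the RTEPermissionInfo fields used here,
relid and requiredPerms, should be double-checked against parsenodes.h):

    #include "postgres.h"
    #include "executor/executor.h"
    #include "nodes/parsenodes.h"
    #include "utils/acl.h"

    static bool
    my_check_perms(List *rangeTable, List *rtePermInfos,
                   bool ereport_on_violation)
    {
        ListCell   *lc;

        (void) rangeTable;          /* no longer needed; scan rtePermInfos */

        foreach(lc, rtePermInfos)
        {
            RTEPermissionInfo *perminfo = lfirst_node(RTEPermissionInfo, lc);

            /* Example policy: refuse anything that needs DELETE rights. */
            if (perminfo->requiredPerms & ACL_DELETE)
            {
                if (ereport_on_violation)
                    ereport(ERROR,
                            (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
                             errmsg("DELETE blocked on relation %u",
                                    perminfo->relid)));
                return false;
            }
        }
        return true;
    }

    /* In _PG_init(): ExecutorCheckPerms_hook = my_check_perms; */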
Michael Paquier
83a1a1b566 Generate pg_stat_get*() functions for tables using macros
The same code pattern is repeated 17 times for int64 counters (0 for
missing entry) and 5 times for timestamps (NULL for missing entry) on
table entries.  This code is switched to use a macro for the basic code
instead, shaving a few hundred lines of originally-duplicated code.  The
function names remain the same, but some fields of PgStat_StatTabEntry
have to be renamed to cope with the new style.

Author: Bertrand Drouvot
Reviewed-by: Nathan Bossart
Discussion: https://postgr.es/m/20221204173207.GA2669116@nathanxps13
2022-12-06 10:46:35 +09:00
David Rowley
8692f6644e Fix thinko introduced in 6b423ec67
As pointed out by Dean Rasheed, we really should be using tmp >
-(PG_INTNN_MIN / 10) rather than tmp > (PG_INTNN_MAX / 10) for checking
for overflows in the accumulation in the pg_strtointNN functions.  This
does happen to be the same number when dividing by 10, but there is a
pending patch which adds other bases and this is not the same number if we
were to divide by 2 rather than 10, for example.  If the base 2 parsing
was to follow this example then we could accidentally think a string
containing the value of PG_INT32_MIN was an overflow in pg_strtoint32.
Clearly that shouldn't overflow.

This does not fix any actual live bugs, only some bad examples of overflow
checks for future bases.

Reported-by: Dean Rasheed
Discussion: https://postgr.es/m/CAEZATCVEtwfhdm-K-etZYFB0=qsR0nT6qXta_W+GQx4RYph1dg@mail.gmail.com
2022-12-05 11:55:05 +13:00
Tom Lane
92c4dafe1e Re-pgindent a few files.
Just because I'm a neatnik, and I'm currently working on
code in this area.  It annoys me to not be able to pgindent
my patches without working around unrelated changes.
2022-12-04 14:25:53 -05:00
David Rowley
6b423ec677 Improve performance of pg_strtointNN functions
Experiments have shown that modern versions of both gcc and clang are
unable to fully optimize the multiplication by 10 that we're doing in the
pg_strtointNN functions.  Both compilers seem to be making use of "imul",
which is not the most efficient way to multiply by 10.  This seems to be
due to the overflow checking that we're doing.  Without the overflow
checks, both those compilers switch to a more efficient method of
multiplying by 10.  In absence of overflow concern, integer multiplication
by 10 can be done by bit-shifting left 3 places to multiply by 8 and then
adding the original value twice.

To allow compilers this flexibility, here we adjust the code so that we
accumulate the number as an unsigned version of the type and remove the
use of pg_mul_sNN_overflow() and pg_sub_sNN_overflow().  The overflow
checking can be done simply by checking if the accumulated value has gone
beyond a 10th of the maximum *signed* value for the given type.  If it has
then the accumulation of the next digit will cause an overflow.  After
this is done, we do a final overflow check before converting the unsigned
version of the number back to its signed counterpart.

Testing has shown about an 8% speedup of a COPY into a table containing 2
INT columns.

Author: David Rowley, Dean Rasheed
Discussion: https://postgr.es/m/CAApHDvrL6_+wKgPqRHr7gH_6xy3hXM6a3QCsZ5ForurjDFfenA@mail.gmail.com
Discussion: https://postgr.es/m/CAApHDvrdYByjfj-=WbmVNFgmVZg88-dE7heukw8p55aJ+W=qxQ@mail.gmail.com
2022-12-04 16:18:18 +13:00
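
A self-contained sketch of the accumulate-unsigned-then-convert approach
described above (not the real pg_strtoint32; only an optional leading '-'
and decimal digits are handled):

    #include <inttypes.h>
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    static bool
    toy_strtoint32(const char *s, int32_t *result)
    {
        uint32_t    acc = 0;
        bool        neg = false;

        if (*s == '-')
        {
            neg = true;
            s++;
        }
        if (*s < '0' || *s > '9')
            return false;                   /* no digits */

        for (; *s >= '0' && *s <= '9'; s++)
        {
            /* Per 8692f6644e, the proper limit is -(INT32_MIN / base); for
             * base 10 it happens to equal INT32_MAX / base, but not for
             * base 2, for example. */
            if (acc > (uint32_t) -(INT32_MIN / 10))
                return false;               /* next digit must overflow */
            acc = acc * 10 + (uint32_t) (*s - '0');
        }
        if (*s != '\0')
            return false;                   /* trailing junk */

        /* Final overflow check, then convert back to the signed type. */
        if (neg)
        {
            if (acc > (uint32_t) INT32_MAX + 1)     /* |INT32_MIN| */
                return false;
            *result = (acc > (uint32_t) INT32_MAX) ? INT32_MIN : -(int32_t) acc;
        }
        else
        {
            if (acc > (uint32_t) INT32_MAX)
                return false;
            *result = (int32_t) acc;
        }
        return true;
    }

    int
    main(void)
    {
        int32_t     v;

        if (toy_strtoint32("-2147483648", &v))
            printf("parsed %" PRId32 "\n", v);      /* parsed -2147483648 */
        if (!toy_strtoint32("2147483648", &v))
            printf("overflow detected\n");          /* overflow detected */
        return 0;
    }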
Tom Lane
29452de734 Doc: flesh out fmgr/README's description of context-node usage.
I wrote this to provide a home for a planned discussion of error
return conventions for non-error-throwing functions.  But it seems
useful as documentation of existing code no matter what becomes of
that proposal, so commit separately.
2022-12-03 10:50:39 -05:00
Andres Freund
cb2e7ddfe5 Prevent pgstats from getting confused when relkind of a relation changes
When the relkind of a relcache entry changes, because a table is converted into
a view, pgstats can get confused in 15+, leading to crashes or assertion
failures.

For HEAD, Tom fixed this in b23cd185fd, by removing support for converting a
table to a view, removing the source of the inconsistency. This commit just
adds an assertion that a relcache entry's relkind does not change, just in
case we end up with another such case in the future. As there are no cases of
changing relkind anymore, we can't add a test that such a change is handled
correctly.

For 15, fix the problem by not maintaining the association with the old pgstat
entry when the relkind changes during a relcache invalidation processing. In
that case the pgstat entry needs to be unlinked first, to avoid
PgStat_TableStatus->relation getting out of sync. Also add a test reproducing
the issues.

No known problem exists in 11-14, so just add the test there.

Reported-by: vignesh C <vignesh21@gmail.com>
Author: Andres Freund <andres@anarazel.de>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/CALDaNm2yXz+zOtv7y5zBd5WKT8O0Ld3YxikuU3dcyCvxF7gypA@mail.gmail.com
Discussion: https://postgr.es/m/CALDaNm3oZA-8Wbps2Jd1g5_Gjrr-x3YWrJPek-mF5Asrrvz2Dg@mail.gmail.com
Backpatch: 15-
2022-12-02 18:10:30 -08:00
Jeff Davis
7ac0f8d384 Fix broken hash function hashbpcharextended().
Ignore trailing spaces for non-deterministic collations when
hashing.

The previous behavior could lead to tuples falling into the wrong
partitions when hash partitioning is combined with the BPCHAR type and
a non-deterministic collation. Fortunately, it did not affect hash
indexes, because hash indexes do not use extended hash functions.

Decline to backpatch, per discussion.

Discussion: https://postgr.es/m/eb83d0ac7b299eb08f9b900dd08a5a0c5d90e517.camel@j-davis.com
Reviewed-by: Richard Guo, Tom Lane
2022-12-02 14:06:31 -08:00
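
The essence of the fix can be illustrated standalone (this is not
hashbpcharextended or a real collation-aware hash; FNV-1a stands in for the
hash function):

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Simple FNV-1a stand-in for the real hash function. */
    static uint32_t
    fnv1a(const char *s, size_t len)
    {
        uint32_t    h = 2166136261u;

        for (size_t i = 0; i < len; i++)
        {
            h ^= (unsigned char) s[i];
            h *= 16777619u;
        }
        return h;
    }

    /* Ignore trailing spaces so that padded and unpadded bpchar values of
     * the same word hash identically and land in the same hash partition. */
    static uint32_t
    hash_bpchar_like(const char *s)
    {
        size_t      len = strlen(s);

        while (len > 0 && s[len - 1] == ' ')
            len--;
        return fnv1a(s, len);
    }

    int
    main(void)
    {
        printf("%" PRIu32 " %" PRIu32 "\n",
               hash_bpchar_like("abc"), hash_bpchar_like("abc   "));
        return 0;
    }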
Tom Lane
cabfb8241d Fix psql's \sf and \ef for new-style SQL functions.
Some options of these commands need to be able to identify the start
of the function body within the output of pg_get_functiondef().
It used to be that that always began with "AS", but since the
introduction of new-style SQL functions, it might also start with
"BEGIN" or "RETURN".  Fix that on the psql side, and add some
regression tests.

Noted by me awhile ago, but I didn't do anything about it.
Thanks to David Johnston for a nag.

Discussion: https://postgr.es/m/AM9PR01MB8268D5CDABDF044EE9F42173FE8C9@AM9PR01MB8268.eurprd01.prod.exchangelabs.com
2022-12-02 14:24:44 -05:00
Jeff Davis
edf12e7bbd Fix memory leak for hashing with nondeterministic collations.
Backpatch through 12, where nondeterministic collations were
introduced (5e1963fb76).

Backpatch-through: 12
2022-12-01 11:49:15 -08:00
Tom Lane
1dd6700f44 Fix under-parenthesized display of AT TIME ZONE constructs.
In commit 40c24bfef, I forgot to use get_rule_expr_paren() for the
arguments of AT TIME ZONE, resulting in possibly not printing parens
for expressions that need it.  But get_rule_expr_paren() wouldn't have
gotten it right anyway, because isSimpleNode() hadn't been taught that
COERCE_SQL_SYNTAX parent nodes don't guarantee sufficient parentheses.
Improve all that.  Also use this methodology for F_IS_NORMALIZED, so
that we don't print useless parens for that.

In passing, remove a comment that was obsoleted later.

Per report from Duncan Sands.  Back-patch to v14 where this code
came in.  (Before that, we didn't try to print AT TIME ZONE that way,
so there was no bug just ugliness.)

Discussion: https://postgr.es/m/f41566aa-a057-6628-4b7c-b48770ecb84a@deepbluecap.com
2022-12-01 11:38:14 -05:00
Alvaro Herrera
599b33b949 Stop accessing checkAsUser via RTE in some cases
A future commit will move the checkAsUser field from RangeTblEntry
to a new node that, unlike RTEs, will only be created for tables
mentioned in the query but not for the inheritance child relations
added to the query by the planner.  So, checkAsUser value for a
given child relation will have to be obtained by referring to that
for its ancestor mentioned in the query.

In preparation, it seems better to expand the use of RelOptInfo.userid
during planning in place of rte->checkAsUser so that there will be
fewer places to adjust for the above change.

Given that the child-to-ancestor mapping is not available during the
execution of a given "child" ForeignScan node, add a checkAsUser
field to ForeignScan to carry the child relation's RelOptInfo.userid.

Author: Amit Langote <amitlangote09@gmail.com>
Discussion: https://postgr.es/m/CA+HiwqGFCs2uq7VRKi7g+FFKbP6Ea_2_HkgZb2HPhUfaAKT3ng@mail.gmail.com
2022-11-30 12:07:03 +01:00
Thomas Munro
cd4329d939 Remove promote_trigger_file.
Previously, an idle startup (recovery) process would wake up every 5
seconds to have a chance to poll for promote_trigger_file, even if that
GUC was not configured.  That promotion triggering mechanism was
effectively superseded by pg_ctl promote and pg_promote() a long time
ago.  There probably aren't many users left and it's very easy to change
to the modern mechanisms, so we agreed to remove the feature.

This is part of a campaign to reduce wakeups on idle systems.

Author: Simon Riggs <simon.riggs@enterprisedb.com>
Reviewed-by: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>
Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Reviewed-by: Thomas Munro <thomas.munro@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Ian Lawrence Barwick <barwick@gmail.com>
Discussion: https://postgr.es/m/CANbhV-FsjnzVOQGBpQ589%3DnWuL1Ex0Ykn74Nh1hEjp2usZSR5g%40mail.gmail.com
2022-11-29 12:08:38 +13:00
Andrew Dunstan
b5d6382496 Provide per-table permissions for vacuum and analyze.
Currently a table can only be vacuumed or analyzed by its owner or
a superuser. This can now be extended to any user by means of an
appropriate GRANT.

Nathan Bossart

Reviewed by: Bharath Rupireddy, Kyotaro Horiguchi, Stephen Frost, Robert
Haas, Mark Dilger, Tom Lane, Corey Huinker, David G. Johnston, Michael
Paquier.

Discussion: https://postgr.es/m/20220722203735.GB3996698@nathanxps13
2022-11-28 12:08:14 -05:00