The "emulation" I wrote for PQsendQuery in pipeline mode to use extended
query protocol, in commit acb7e4eb6b1c, is problematic. Due to numerous
bugs we'll soon remove it. As a first step and for all branches back to
14, stop using PQsendQuery in libpq_pipeline. Also remove a few test
lines that will no longer be relevant.
Backpatch to 14.
Discussion: https://postgr.es/m/CA+mi_8ZGSQNmW6-mk_iSR4JZB_LJ4ww3suOF+1vGNs3MrLsv4g@mail.gmail.com
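As an illustration of the switch away from the emulation, here is a minimal
sketch of dispatching a query through the extended query protocol directly
with PQsendQueryParams; the helper name, query, and error handling are made
up for the example and this is not the actual libpq_pipeline test code. It
assumes the connection is already in pipeline mode via PQenterPipelineMode().

#include "libpq-fe.h"

/* Sketch only (hypothetical helper): queue one query with the extended
 * query protocol while in pipeline mode, then append a sync point. */
static void
send_one_pipelined_query(PGconn *conn)
{
	const char *values[1] = {"42"};

	if (PQsendQueryParams(conn, "SELECT $1::int", 1,
						  NULL,		/* let the server deduce the parameter type */
						  values, NULL, NULL, 0) != 1)
		pg_fatal("dispatch failed: %s", PQerrorMessage(conn));
	if (PQpipelineSync(conn) != 1)
		pg_fatal("pipeline sync failed: %s", PQerrorMessage(conn));
}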
We previously thought that allowing such cases could confuse users when they
specify DROP TABLES IN SCHEMA, but based on discussion that doesn't seem to
be the case. This lifts the restriction during ALTER TABLE ... SET SCHEMA
which used to ensure that we couldn't end up with a publication having both
a schema and a table of the same schema.
To allow this, we need to forbid having any schema on a publication if
column lists on a table are specified (and vice versa). Otherwise we would
still need a restriction during ALTER TABLE ... SET SCHEMA to forbid cases
that could lead to a publication having both a schema and a table of the
same schema with a column list.
Based on suggestions by Peter Eisentraut.
Author: Hou Zhijie and Vignesh C
Reviewed-By: Peter Smith, Amit Kapila
Backpatch-through: 15, where it was introduced
Discussion: https://postgr.es/m/2729c9e2-9aac-8cda-f2f4-34f2bcc18f4e@enterprisedb.com
This may be a bit too subtle, but removing that word from there makes
this clause no longer a perfect parallel of the GRANT variant "ALL
TABLES IN SCHEMA": indeed, for publications what we record is the schema
itself, not the tables therein, which means that any tables added to the
schema in the future are also published. This is completely different
from what GRANT does, which is to affect only the tables that exist when
the command is executed.
There isn't resounding support for this change, but there are a few
positive votes and no opposition. Because the time to 15 RC1 is very
short, let's get this out now.
Backpatch to 15.
Discussion: https://postgr.es/m/2729c9e2-9aac-8cda-f2f4-34f2bcc18f4e@enterprisedb.com
The bounds hardcoded in compression.c since ffd5365 (minimum at 1 and
maximum at 22) do not match the reality of what zstd is able to
handle, these values being available via ZSTD_minCLevel() and
ZSTD_maxCLevel() at run-time. The maximum of 22 is actually correct
in recent versions, but the minimum was not, as the library can go down
to -131072 by design. This commit changes the code to use the run-time
values instead of the hardcoded ones.
Zstd seems to assume that these bounds could change in the future, and
Postgres will be able to adapt automatically to such changes thanks to
what's being done in this commit.
Reported-by: Justin Pryzby
Discussion: https://postgr.es/m/20220922033716.GL31833@telsasoft.com
Backpatch-through: 15
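For illustration, a minimal sketch of the approach; the helper name below is
made up, not the actual compression.c code.

#include <stdbool.h>
#include <zstd.h>

/* Sketch only: validate a zstd compression level against the library's
 * run-time bounds instead of the old hardcoded 1..22 range. */
static bool
zstd_level_in_bounds(int level)
{
	return level >= ZSTD_minCLevel() && level <= ZSTD_maxCLevel();
}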
If the ps display is not cleared at this point, the process could
continue displaying "recovering NNN" even while handling end-of-recovery
steps. df9274a tackled that by providing some information with the
end-of-recovery checkpoint, but 7ff23c6 nullified the effect of the
first commit.
Per a suggestion from Justin, just clear the ps display when we are done
with recovery, so that no incorrect information is displayed. This may
get extended in the future, but for now restore the pre-7ff23c6
behavior.
Author: Justin Pryzby
Discussion: https://postgr.es/m/20220913223954.GU31833@telsasoft.com
Backpatch-through: 15
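A minimal sketch of the idea, not the exact call site in the startup code:
set_ps_display() is the existing routine for updating the process title.

/* Sketch only: reset the process title once the redo loop has ended, so
 * end-of-recovery steps no longer advertise "recovering NNN". */
set_ps_display("");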
Commit 7103ebb7aa accidentally added the tab-completion for MERGE in the
middle of that for LOCK TABLE. This commit fixes that issue, and also
adds some more tab-completion for MERGE.
Back-patch to v15 where MERGE was introduced.
Author: Kotaro Kawamoto, Fujii Masao
Reviewed-by: Shinya Kato, Álvaro Herrera
Discussion: https://postgr.es/m/9f1ad2a87a58cd5e7d64f3993130958d@oss.nttdata.com
We check that the ICU locale is only specified if the ICU locale
provider is selected. But we did that too early. We need to wait
until we load the settings of the template database, since that could
also set what the locale provider is.
Reported-by: Marina Polyakova <m.polyakova@postgrespro.ru>
Discussion: https://www.postgresql.org/message-id/9ba4cd1ea6ed6b7b15c0ff15e6f540cd@postgrespro.ru
For publication schemas (OBJECT_PUBLICATION_NAMESPACE) and user
mappings (OBJECT_USER_MAPPING), pg_get_object_address() checked the
array length of the second argument, but not of the first argument.
If the first argument was too long, it would just silently ignore
everything beyond the first element. Fix that by checking the length of
the first argument as well.
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/caaef70b-a874-1088-92ef-5ac38269c33b%40enterprisedb.com
clang 15+ will issue a set-but-not-used warning when the only
use of a variable is in autoincrements (e.g., "foo++;").
That's perfectly sensible, but it detects a few more cases that
we'd not noticed before. Silence the warnings with our usual
methods, such as PG_USED_FOR_ASSERTS_ONLY, or in one case by
actually removing a useless variable.
One thing that we can't nicely get rid of is that with %pure-parser,
Bison emits "yynerrs" as a local variable that falls foul of this
warning. To silence those, I inserted "(void) yynerrs;" in the
top-level productions of affected grammars.
Per recently-established project policy, this is a candidate
for back-patching into out-of-support branches: it suppresses
annoying compiler warnings but changes no behavior. Hence,
back-patch to 9.5, which is as far as these patches go without
issues. (A preliminary check shows that the prior branches
need some other set-but-not-used cleanups too, so I'll leave
them for another day.)
Discussion: https://postgr.es/m/514615.1663615243@sss.pgh.pa.us
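As an illustration of the first silencing method, a sketch with an invented
function; the grammar fix itself is simply a bare "(void) yynerrs;"
statement in each affected top-level production.

#include "postgres.h"

/* Sketch only (invented example): a variable whose only use outside
 * Assert() is an increment gets PG_USED_FOR_ASSERTS_ONLY so that
 * clang 15+ stays quiet in non-assert builds. */
static void
count_to(int nitems)
{
	int			nseen PG_USED_FOR_ASSERTS_ONLY = 0;

	for (int i = 0; i < nitems; i++)
		nseen++;				/* the only "use" apart from the Assert */

	Assert(nseen == nitems);	/* compiled out in non-assert builds */
}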
The header comment for get_cheapest_group_keys_order() claimed that the
output arguments were set to a newly allocated list which may be freed by
the calling function; however, this was not always true, as the function
would simply leave these arguments untouched in some cases.
This tripped me up when working on 1349d2790 as I mistakenly assumed I
could perform a list_concat with the output parameters. That turned out
bad due to list_concat modifying the original input lists.
In passing, make it more clear that the number of distinct values is
important to reduce tiebreaks during sorts. Also, explain what the
n_preordered parameter means.
Backpatch-through: 15, where get_cheapest_group_keys_order was introduced.
The API contract for planstate_tree_walker() callbacks is that they
take a PlanState pointer and a context pointer. Somebody figured
they could save a couple lines of code by ignoring that, and passing
ExecShutdownNode itself as the walker even though it has but one
argument. Somewhat remarkably, we've gotten away with that so far.
However, it seems clear that the upcoming C2x standard means to
forbid such cases, and compilers that actively break such code
likely won't be far behind. So spend the extra few lines of code
to do it honestly with a separate walker function.
In HEAD, we might as well go further and remove ExecShutdownNode's
useless return value. I left that as-is in back branches though,
to forestall complaints about ABI breakage.
Back-patch, with the thought that this might become of practical
importance before our stable branches are all out of service.
It doesn't seem to be fixing any live bug on any currently known
platform, however.
Discussion: https://postgr.es/m/208054.1663534665@sss.pgh.pa.us
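A simplified sketch of the shape of the fix, not the committed code
verbatim: keep ExecShutdownNode as the entry point, but hand
planstate_tree_walker() a callback with the two-argument signature it
expects.

/* Sketch only: a proper two-argument walker for planstate_tree_walker(). */
static bool
ExecShutdownNode_walker(PlanState *node, void *context)
{
	if (node == NULL)
		return false;

	/* ... per-node shutdown work goes here ... */

	return planstate_tree_walker(node, ExecShutdownNode_walker, context);
}

bool
ExecShutdownNode(PlanState *node)
{
	return ExecShutdownNode_walker(node, NULL);
}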
The function has a bool argument named "case_insensitive", but that was
spelled "case_sensitive" in the declaration. Make them consistent now
to avoid confusion in the future.
Author: Peter Geoghegan <pg@bowt.ie>
Reviewed-By: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/CAH2-WznJt9CMM9KJTMjJh_zbL5hD9oX44qdJ4aqZtjFi-zA3Tg@mail.gmail.com
Backpatch: 10-
Frontend code shouldn't include postgres.h. Some files in src/port/ need to
include postgres.h/postgres_fe.h, but these files don't.
Discussion: https://postgr.es/m/20220915022626.5xx3ccgkzpkqw5mq@awork3.anarazel.de
Backpatch: 12-, where 3fd2a7932ef introduced (some) of these files
So far they were created below CacheMemoryContext. However, that's not
guaranteed to exist in all situations, leading to memory contexts created as
top-level contexts. There isn't actually a good reason anymore to create them
below CacheMemoryContext, so just creating them below TopMemoryContext seems
the best approach.
Reported-by: Reid Thompson <reid.thompson@crunchydata.com>
Reviewed-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Reviewed-by: Andres Freund <andres@anarazel.de>
Author: "Drouvot, Bertrand" <bdrouvot@amazon.com>
Discussion: https://postgr.es/m/b948b729-42fe-f88c-2f4a-0e65d84c049b@amazon.com
Backpatch: 15-
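A minimal sketch of the change in spirit; the variable and context names are
made up, and this line would live in the relevant initialization routine.

/* Sketch only: TopMemoryContext always exists by the time this runs,
 * whereas CacheMemoryContext may not have been created yet. */
backendStatsContext = AllocSetContextCreate(TopMemoryContext,
											"backend statistics data",
											ALLOCSET_DEFAULT_SIZES);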
Very occasionally the stats test failed due to the number of sessions not
being updated yet. Likely this requires that there is contention on the
database's stats entry. Solve this by forcing pending stats to be flushed
before fetching the stats.
I verified that there are no other test failures after making
pgstat_report_stat() only flush stats when force = true.
Per message from Tom Lane and buildfarm member crake.
Discussion: https://postgr.es/m/3428246.1663271992@sss.pgh.pa.us
Backpatch: 15-, where 5264add7847 added the test
Treat arguments declared as RECORD as if that were a polymorphic type
(which it is, sort of), in that we substitute the actual argument type
while forming the function cache lookup key. This allows the specific
composite type to be known in some cases where it was not before,
at the cost of making a separate function cache entry for each named
composite type that's passed to the function during a session. The
particular symptom discussed in bug #17610 could be solved in other
more-efficient ways, but only at the cost of considerable development
work, and there are other cases where we'd still fail without this.
Per bug #17610 from Martin Jurča. Back-patch to v11 where we first
allowed plpgsql functions to be declared as taking type RECORD.
Discussion: https://postgr.es/m/17610-fb1eef75bf6c2364@postgresql.org
If the createdb tests run under the C locale, the database cluster
will be initialized with encoding SQL_ASCII. With the checks added in
c7db01e325a530ec38ec7ba57cd3ed32e123e33c, this will cause several
ICU-related tests to fail because SQL_ASCII is not supported by ICU.
To work around that, use initdb option -E UTF8 for those tests to get
past that check.
Check in CREATE DATABASE and initdb that the selected encoding is
supported by ICU. Before, they would pass but users would later get
an error from the server when they tried to use the database.
Also document that initdb sets the encoding to UTF8 by default if the
ICU locale provider is chosen.
Author: Marina Polyakova <m.polyakova@postgrespro.ru>
Reviewed-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Discussion: https://www.postgresql.org/message-id/6dd6db0984d86a51b7255ba79f111971@postgrespro.ru
I happened to notice that libpq_pipeline's private implementation
of pg_fatal lacked any pg_attribute_printf decoration. Indeed,
adding that turned up a mistake! We'd likely never have noticed
because the error exits in this code are unlikely to get hit,
but still, it's a bug.
We're so used to having the compiler check this stuff for us that
a printf-like function without pg_attribute_printf is a land mine.
I wonder if there is a way to detect such omissions.
Back-patch to v14 where this code came in.
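For reference, a sketch of the kind of decoration involved; the declaration
below is illustrative, not a verbatim copy of libpq_pipeline.c. With
pg_attribute_printf() in place, the compiler cross-checks the format string
against the actual arguments.

/* Sketch only: argument 1 is the format string, varargs start at argument 2. */
static void pg_fatal(const char *fmt,...) pg_attribute_printf(1, 2) pg_attribute_noreturn();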
After commit cc2c7d65fc27e877c9f407587b0b92d46cd6dd16 added this flag,
failure to reset it caused assertion failures. In non-assert builds, it
made the system fail to achieve the objectives listed in that commit;
chiefly, we might emit a spurious log message. Back-patch to v15, where
that commit first appeared.
Bharath Rupireddy and Kyotaro Horiguchi. Reviewed by Dilip Kumar,
Nathan Bossart and Michael Paquier. Reported by Dilip Kumar.
Discussion: https://postgr.es/m/CAFiTN-sE3ry=ycMPVtC+Djw4Fd7gbUGVv_qqw6qfzp=JLvqT3g@mail.gmail.com
In 29f45e299, we added support for optimizing the execution of NOT
IN(values) by using a hash table instead of a linear search over the
array. That commit neglected to update the header comment for
convert_saop_to_hashed_saop() to mention this fact. Here we fix that.
Author: James Coleman
Discussion: https://postgr.es/m/CAAaqYe99NUpAPcxgchGstgM23fmiGjqQPot8627YgkBgNt=BfA@mail.gmail.com
Backpatch-through: 15, where 29f45e299 was added.
This appears to be a merge mistake in 96ef3237bf74. We could put it
back the way it was before JSON_TABLE and it'd be two lines shorter, but
it's likely that JSON_TABLE will be back and will prefer things this
way. It makes no other difference in practice.
Backpatch to 15.
Reported by Ranier Vilela
Discussion: https://postgr.es/m/CAEudQAr4nOcNQskC4oBEZN4S+4heJ=1ch_ZKOxU+_Ef-FQSf-g@mail.gmail.com
The zlib documentation mentions the values supported for the compression
strategy, but this code has been using a hardcoded value of 0 rather
than Z_DEFAULT_STRATEGY. This commit adjusts the code to use
Z_DEFAULT_STRATEGY.
Backpatch down to where this code was added, to ease the backport of
any future patch touching this area.
Reported-by: Tom Lane
Discussion: https://postgr.es/m/1400032.1662217889@sss.pgh.pa.us
Backpatch-through: 10
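A minimal sketch of the zlib call pattern involved; the helper is
hypothetical and the windowBits/memLevel values are zlib's conventional
ones, not necessarily what the code in question uses. Spelling out
Z_DEFAULT_STRATEGY, even though it is numerically 0, makes the intent
explicit.

#include <zlib.h>

/* Sketch only: initialize a gzip-format deflate stream with an explicit
 * strategy symbol rather than a bare 0. */
static int
init_gzip_stream(z_stream *zs, int level)
{
	return deflateInit2(zs, level, Z_DEFLATED,
						15 + 16,	/* windowBits; +16 requests a gzip header */
						8,			/* memLevel */
						Z_DEFAULT_STRATEGY);
}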
This adds additional variants of palloc, pg_malloc, etc. that
encapsulate common usage patterns and provide more type safety.
Specifically, this adds palloc_object(), palloc_array(), and
repalloc_array(), which take the type name of the object to be
allocated as their first argument and cast the return value to a
pointer to that type. There are also palloc0_object() and palloc0_array()
variants for initializing with zero, and pg_malloc_*() variants of all
of the above.
Inspired by the talloc library.
This is backpatched from master so that future backpatchable code can
make use of these APIs. This patch by itself does not contain any
users of these APIs.
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://www.postgresql.org/message-id/flat/bb755632-2a43-d523-36f8-a1e7a389a907@enterprisedb.com
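A short usage sketch; WidgetState is a made-up type for the example.

#include "postgres.h"

typedef struct WidgetState
{
	int			id;
	double		weight;
} WidgetState;

static void
allocate_widgets(void)
{
	WidgetState *ws = palloc_object(WidgetState);	/* palloc(sizeof(WidgetState)), cast */
	WidgetState *wsz = palloc0_object(WidgetState);	/* zero-initialized variant */
	int		   *levels = palloc_array(int, 32);		/* palloc(32 * sizeof(int)), cast */

	levels = repalloc_array(levels, int, 64);		/* resize, element type kept */

	pfree(levels);
	pfree(wsz);
	pfree(ws);
}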
PG_COMPRESSION_OPTION_LEVEL is removed from the compression
specification logic, and instead the compression level is always set to
each library's default if nothing is directly given. This centralizes
the checks on the compression methods supported by a given build, and
always assigns a default compression level when parsing a compression
specification. As a result, whether a build supports a given
compression method is complained about at an earlier stage than
previously, i.e. when parsing a specification in the backend or the
frontend, and not when processing it. zstd, lz4 and zlib are all able
to handle a default value in their respective routines that set up the
compression level, hence the backend or frontend code (pg_receivewal or
pg_basebackup) no longer needs to know what the default compression
level should be if nothing is specified: the specification parsing now
assigns it. It can also be enforced by passing down a "level" set to
the default value, which the backend will accept (the replication
protocol is for example able to handle a command like
BASE_BACKUP (COMPRESSION_DETAIL 'gzip:level=-1')).
This code simplification fixes an issue with pg_basebackup --gzip
introduced by ffd5365, where the tarball of the streamed WAL segments
would be created as pg_wal.tar.gz but with uncompressed contents, while
the intention is to compress the segments with gzip at a default level.
The confusion originates in the handling of gzip's default compression
level (-1 or Z_DEFAULT_COMPRESSION): a value of 0 was getting assigned,
which is what walmethods.c considers as equivalent to no compression
when streaming WAL segments with its tar methods. Always assigning the
compression level removes the confusion of some code paths considering
a value of 0 set in a specification as either no compression or a
default compression level.
Note that 010_pg_basebackup.pl has to be adjusted to skip a few tests
where the shape of the compression detail string for client and
server-side compression was checked using gzip. This is a result of the
code simplification, as gzip specifications cannot be used if a build
does not support it.
Reported-by: Tom Lane
Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/1400032.1662217889@sss.pgh.pa.us
Backpatch-through: 15
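A hypothetical sketch of the idea only, not the actual code in
common/compression.c: when a specification carries no explicit level, the
parser itself assigns each library's default, so no caller has to.

/* Sketch only: pick a per-method default compression level at parse time. */
static int
default_compress_level(pg_compress_algorithm algorithm)
{
	switch (algorithm)
	{
		case PG_COMPRESSION_GZIP:
			return Z_DEFAULT_COMPRESSION;	/* -1, zlib's own default */
		case PG_COMPRESSION_ZSTD:
			return ZSTD_CLEVEL_DEFAULT;		/* zstd's documented default */
		case PG_COMPRESSION_LZ4:
		default:
			return 0;						/* lz4's default / no compression */
	}
}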
Locale options can be specified for initdb, createdb, and CREATE
DATABASE. In initdb, it has always been possible to specify --locale
and then some --lc-* option to override a category. CREATE DATABASE
and createdb didn't allow that, requiring either the all-categories
option or only per-category options. In
f2553d43060edb210b36c63187d52a632448e1d2, this was changed in CREATE
DATABASE (perhaps by accident?) to be more like the initdb behavior,
but createdb still had the old behavior.
Now we change createdb to match the behavior of CREATE DATABASE and
initdb, and also update the documentation of CREATE DATABASE to match
the new behavior, which was not done in the above commit.
Author: Marina Polyakova <m.polyakova@postgrespro.ru>
Reviewed-by: Justin Pryzby <pryzby@telsasoft.com>
Discussion: https://www.postgresql.org/message-id/7c99c132dc9c0ac630e0127f032ac480@postgrespro.ru
This change concerns a couple of .txt files (for internal state checks)
that were still written in the directory from where the binary is
executed, and not in the subdirectory located in the target cluster.
Like the other .txt files that already do so (such as
loadable_libraries.txt), these are now saved in the base output
directory. Note that on failure, the logs report the full path of the
generated .txt file, so these are easy to find.
Oversight in 38bfae3.
Author: Daniel Gustafsson
Reviewed-by: Michael Paquier, Justin Pryzby
Discussion: https://postgr.es/m/181A6DA8-3B7F-4B71-82D5-363FF0146820@yesql.se
Backpatch-through: 15
The primary fix here is to fix has_matching_range() so it does not
reference ranges->values[-1] when nranges == 0. Similar problems existed
in AssertCheckRanges() too. It does not look like any of these problems
could lead to a crash as the array in question is at the end of the Ranges
struct, and values[-1] is memory that belongs to other fields in the
struct. However, let's get rid of these rather unsafe coding practices.
In passing, I (David) adjusted some comments to try to make it more clear
what some of the fields are for in the Ranges struct. I had to study the
code to find out what nsorted was for as I couldn't tell from the
comments.
Author: Ranier Vilela
Discussion: https://postgr.es/m/CAEudQAqJQzPitufX-jR=YUbJafpCDAKUnwgdbX_MzSc93wuvdw@mail.gmail.com
Backpatch-through: 14, where multi-range brin was added.
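A simplified sketch of the has_matching_range() fix, not the committed code
verbatim; minvalue and maxvalue are locals of that function.

/* Sketch only: with nranges == 0 there is nothing to match, and reading
 * ranges->values[2 * nranges - 1] would be values[-1]. */
if (ranges->nranges == 0)
	return false;

minvalue = ranges->values[0];
maxvalue = ranges->values[2 * ranges->nranges - 1];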
Commit c4c340088 changed geometric operators to use float4 and float8
functions, and handle NaNs in a better way. The circle sameness test
had a typo in the code which resulted in any circle with a NaN radius
on the left-hand side being considered the same as the right-hand circle.
postgres=# select '<(0,0),NaN>'::circle ~= '<(0,0),1>'::circle;
?column?
----------
t
(1 row)
This fixes the sameness test to consider the radius of both the left
and right circle.
Backpatch to v12 where this was introduced.
Author: Ranier Vilela <ranier.vf@gmail.com>
Discussion: https://postgr.es/m/CAEudQAo8dK=yctg2ZzjJuzV4zgOPBxRU5+Kb+yatFiddtQk6Rw@mail.gmail.com
Backpatch-through: v12
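A simplified sketch of the corrected radius comparison, not the committed
code verbatim; CIRCLE and FPeq come from the geometric-type headers.

#include <math.h>

/* Sketch only: radii are "same" if numerically equal, or if both are NaN;
 * the bug checked the left circle's radius twice. */
static bool
circle_radius_same(const CIRCLE *circle1, const CIRCLE *circle2)
{
	return FPeq(circle1->radius, circle2->radius) ||
		(isnan(circle1->radius) && isnan(circle2->radius));
}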
The ECPG preprocessor converted code such as
static varchar str1[10], str2[20], str3[30];
into
static struct varchar_1 { int len; char arr[ 10 ]; } str1 ;
struct varchar_2 { int len; char arr[ 20 ]; } str2 ;
struct varchar_3 { int len; char arr[ 30 ]; } str3 ;
thus losing the storage attribute for the later variables.
Repeat the declaration for each such variable.
(Note that this occurred only for variables declared "varchar"
or "bytea", which may help explain how it escaped detection
for so long.)
Andrey Sokolov
Discussion: https://postgr.es/m/942241662288242@mail.yandex.ru
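With the fix, the storage attribute is expected to be repeated for each
generated struct, so the example above should preprocess to something like:

static struct varchar_1 { int len; char arr[ 10 ]; } str1 ;
static struct varchar_2 { int len; char arr[ 20 ]; } str2 ;
static struct varchar_3 { int len; char arr[ 30 ]; } str3 ;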
Because of inadequate filtering, the check triggers were confusing the
search for action triggers in GetForeignKeyActionTriggers and vice-versa
in GetForeignKeyCheckTriggers; this confusion resulted in seemingly
random assertion failures, and can have real impact in non-asserting
builds depending on catalog order. Change these functions so that they
correctly ignore triggers that are not relevant to each side.
To reduce the odds of further problems, do not break out of the
searching loop in assertion builds. This break is likely to hide bugs;
without it, we would have detected this bug immediately.
This problem was introduced by f4566345cf40, so backpatch to 15 where
that commit first appeared.
Author: Amit Langote <amitlangote09@gmail.com>
Discussion: https://postgr.es/m/20220908172029.sejft2ppckbo6oh5@awork3.anarazel.de
Discussion: https://postgr.es/m/4104619.1662663056@sss.pgh.pa.us
On failure in restoring a block image, no details were provided, even
though a failure can come from an inconsistent record state, from the
decompression processing itself, or from a build that does not support
the compression method used.
RestoreBlockImage() is used in two code paths in the backend code,
during recovery and when checking page consistency after applying
masking, and both places are changed to consume the error message
produced by the internal routine when it returns a false status. All
the error messages are reported under ERRCODE_INTERNAL_ERROR, which is
also used when attempting to access a page compressed by a method not
supported by the build attempting the decompression. This is something
that can happen in core when doing physical replication with a primary
and a standby using inconsistent build options, for example.
This routine has been available since 2c03216d, and it has never
provided any context about the error when it failed. This change is
justified even more after 57aa5b2, which introduced compression of FPWs
in WAL.
Reported-by: Justin Pryzby
Author: Michael Paquier
Discussion: https://postgr.es/m/20220905002320.GD31833@telsasoft.com
Backpatch-through: 15
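A simplified sketch of what a caller does now, close in shape to but not a
verbatim copy of the recovery-side code; record, block_id and page are the
caller's existing locals.

/* Sketch only: report the routine's own error message under
 * ERRCODE_INTERNAL_ERROR instead of failing without details. */
if (!RestoreBlockImage(record, block_id, page))
	ereport(ERROR,
			(errcode(ERRCODE_INTERNAL_ERROR),
			 errmsg_internal("%s", record->errormsg_buf)));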
During ALTER TABLE ATTACH PARTITION, if the name of a parent's foreign
key constraint is already used on the partition, the code tries to
choose another one before the FK attributes list has been populated,
so the resulting constraint name was "<relname>__fkey" instead of
"<relname>_<attrs>_fkey". Repair, and add a test case.
Backpatch to 12. In 11, the code to attach a partition was not smart
enough to cope with conflicting constraint names, so the problem doesn't
exist there.
Author: Jehan-Guillaume de Rorthais <jgdr@dalibo.com>
Discussion: https://postgr.es/m/20220901184156.738ebee5@karst
We should process completed IOs *before* trying to start more, so that
it is always possible to decode one more record when the decoded record
queue is empty, even if maintenance_io_concurrency is set so low that a
single earlier WAL record might have saturated the IO queue.
That bug was hidden because the effect of maintenance_io_concurrency was
arbitrarily clamped to be at least 2. Fix the ordering, and also remove
that clamp. We need a special case for 0, which is now treated the same
as recovery_prefetch=off, but otherwise the number is used directly.
This allows for testing with 1, which would have made the problem
obvious in simple test scenarios.
Also add an explicit error message for missing contrecords. It was a
bit strange that we didn't report an error already, and became a latent
bug with prefetching, since the internal state that tracks aborted
contrecords would not survive retrying, as revealed by
026_overwrite_contrecord.pl with this adjustment. Reporting an error
prevents that.
Back-patch to 15.
Reported-by: Justin Pryzby <pryzby@telsasoft.com>
Reviewed-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Discussion: https://postgr.es/m/20220831140128.GS31833%40telsasoft.com
The addition of published column names forgot to filter on attisdropped,
leading to cases where you could see "........pg.dropped.1........"
or the like as a reportedly-published column.
While we're here, rewrite the new subquery to get a more efficient plan
for it.
Hou Zhijie, per report from Jaime Casanova. Back-patch to v15 where
the bug was introduced. (Sadly, this means we need a post-beta4
catversion bump before beta4 has even hit the streets. I see no
good alternative though.)
Discussion: https://postgr.es/m/Yxa1SU4nH2HfN3/i@ahch-to