CREATE PUBLICATION has failed spuriously when applied to a permanent
relation created or rewritten in the current transaction. Make the same
change at a second code site having the same semantic intent; that second
instance has no user-visible consequences. Back-patch to v13, where
commit c6b92041d38512a4176ed76ad06f713d2e6c01a8 broke this.
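A minimal sketch of the affected pattern (names hypothetical; per the
above, the breakage traces to the wal_level=minimal WAL-skip work):
BEGIN;
CREATE TABLE t (a int);
CREATE PUBLICATION p FOR TABLE t;  -- spurious failure here, before this fix
COMMIT;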
Kyotaro Horiguchi
Discussion: https://postgr.es/m/20210113.160705.2225256954956139776.horikyota.ntt@gmail.com
In a cluster that has used CREATE INDEX CONCURRENTLY while prepared
transactions were enabled, queries that use the resulting index can silently
fail to find rows. Fix this for future CREATE INDEX CONCURRENTLY by
making it wait for prepared transactions like it waits for ordinary
transactions. This expands the VirtualTransactionId structure domain to
admit prepared transactions. It may be necessary to reindex to recover
from past occurrences. Back-patch to 9.5 (all supported versions).
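As an illustration (a sketch only; names are hypothetical), the window
looks like this:
-- session 1
BEGIN;
INSERT INTO tab VALUES (1);
PREPARE TRANSACTION 'tx1';
-- session 2: before this fix, the index build did not wait for 'tx1',
-- so the index could silently lack rows that 'tx1' later commits
CREATE INDEX CONCURRENTLY tab_a_idx ON tab (a);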
Andrey Borodin, reviewed (in earlier versions) by Tom Lane and Michael
Paquier.
Discussion: https://postgr.es/m/2E712143-97F7-4890-B470-4A35142ABC82@yandex-team.ru
When the .** jsonpath accessor handles an array, it selects both the
array and each of its elements. In lax mode, subsequent accessors then
automatically unwrap arrays, so the content of each array element may be
selected twice. Even though this behavior is counterintuitive, it is
correct, because everything works as designed. This commit documents it.
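A minimal example of the documented behavior (using jsonb_path_query):
SELECT jsonb_path_query('{"a": [5]}', 'lax $.** ? (@ == 5)');
-- returns 5 twice: once from the array element itself, and once more
-- because the filter, in lax mode, also unwraps the array selected by .**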
Backpatch to 12 where the jsonpath language was introduced.
Reported-by: Thomas Kellerer
Bug: #16828
Discussion: https://postgr.es/m/16828-2b0229babfad2d8c%40postgresql.org
Discussion: https://postgr.es/m/CAPpHfdtS-nNidT%3DEqZbAYOPcnNOWh_sd6skVdu2CAQUGdvpT8Q%40mail.gmail.com
Author: Alexander Korotkov, revised by Tom Lane
Reviewed-by: Alvaro Herrera, Thomas Kellerer, Tom Lane
Backpatch-through: 12
Per buildfarm and local experimentation, bleeding-edge gcc isn't
convinced that the MemSet in reorder_function_arguments() is safe.
Shut it up by adding an explicit check that pronargs isn't negative,
and by changing MemSet to memset. (It appears that either change is
enough to quiet the warning at -O2, but let's do both to be sure.)
In trying to protect the user from inconsistent behavior, commit
487e9861d0cf "Enable BEFORE row-level triggers for partitioned tables"
tried to prevent BEFORE UPDATE FOR EACH ROW triggers from moving the row
from one partition to another. However, it turns out that the
restriction is wrong in two ways: first, it fails spuriously, preventing
valid situations from working, as in bug #16794; and second, it doesn't
protect against any misbehavior, because tuple routing would cope anyway.
Fix by removing that restriction.
We keep the same restriction on BEFORE INSERT FOR EACH ROW triggers,
though. It is valid and useful there. In the future we could remove it
by having tuple rerouting work for inserts as it does for updates.
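A sketch of a case that now works (names hypothetical):
CREATE TABLE p (a int) PARTITION BY LIST (a);
CREATE TABLE p1 PARTITION OF p FOR VALUES IN (1);
CREATE TABLE p2 PARTITION OF p FOR VALUES IN (2);
CREATE FUNCTION p_move() RETURNS trigger LANGUAGE plpgsql
AS $$ BEGIN NEW.a := 2; RETURN NEW; END $$;
CREATE TRIGGER p_before BEFORE UPDATE ON p
FOR EACH ROW EXECUTE FUNCTION p_move();
INSERT INTO p VALUES (1);
UPDATE p SET a = a;  -- the trigger changes the key; tuple routing moves the row to p2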
Backpatch to 13.
Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
Reported-by: Phillip Menke <pg@pmenke.de>
Discussion: https://postgr.es/m/16794-350a655580fbb9ae@postgresql.org
perform_pruning_combine_step() was not taught about the number of
partition indexes used in hash partitioning; more embarrassingly,
get_matching_hash_bounds() also had it wrong. These errors are masked
in the common case where all the partitions have the same modulus
and no partition is missing. However, with missing or unequal-size
partitions, we could erroneously prune some partitions that need
to be scanned, leading to silently wrong query answers.
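A sketch of a susceptible layout (names hypothetical):
CREATE TABLE h (a int) PARTITION BY HASH (a);
CREATE TABLE h1 PARTITION OF h FOR VALUES WITH (MODULUS 2, REMAINDER 0);
CREATE TABLE h2 PARTITION OF h FOR VALUES WITH (MODULUS 4, REMAINDER 1);
-- the partition for (MODULUS 4, REMAINDER 3) is deliberately missing
SELECT * FROM h WHERE a = 42;  -- pruning could skip a partition it had to scan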
While a minimal-footprint fix for this could be to export
get_partition_bound_num_indexes and make the incorrect functions use it,
I'm of the opinion that that function should never have existed in the
first place. It's not reasonable data structure design that
PartitionBoundInfoData lacks any explicit record of the length of
its indexes[] array. Perhaps that was all right when it could always
be assumed equal to ndatums, but something should have been done about
it as soon as that stopped being true. Putting in an explicit
"nindexes" field makes both partition_bounds_equal() and
partition_bounds_copy() simpler, safer, and faster than before,
and removes explicit knowledge of the number-of-partition-indexes
rules from some other places too.
This change also makes get_hash_partition_greatest_modulus obsolete.
I left that in place in case any external code uses it, but no core
code does anymore.
Per bug #16840 from Michał Albrycht. Back-patch to v11 where the
hash partitioning code came in. (In the back branches, add the new
field at the end of PartitionBoundInfoData to minimize ABI risks.)
Discussion: https://postgr.es/m/16840-571a22976f829ad4@postgresql.org
We had "short *mdy" in the extern declarations, but "short mdy[3]"
in the actual function definitions. Per C99 these are equivalent,
but recent versions of gcc have started to issue warnings about
the inconsistency. Clean it up before the warnings get any more
widespread.
Back-patch, in case anyone wants to build older PG versions with
bleeding-edge compilers.
Discussion: https://postgr.es/m/2401575.1611764534@sss.pgh.pa.us
doConnect() never returns connections in state CONNECTION_BAD, so
checking for that is pointless. Remove the code that does.
This code has been dead since ba708ea3dc84, 20 years ago.
Discussion: https://postgr.es/m/20210126195224.GA20361@alvherre.pgsql
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
When building aggregate expression steps, strict checks need a bailout
jump for when a null value is encountered, so there is a list of steps
that require later adjustment. Adding entries to that list for steps
that aren't actually strict would be harmless, except that there is an
Assert which catches them. This leads to spurious errors on assert-enabled
builds, for data sets that trigger parallel aggregation of an
aggregate with a non-strict deserialization function (no such
aggregates exist in the core system).
Repair by not adding the adjustment entry when it's not needed.
Backpatch to 11, where the code was introduced.
Per a report from Darafei (Komzpa) of the PostGIS project; analysis
and patch by me.
Discussion: https://postgr.es/m/87mty7peb3.fsf@news-spur.riddles.org.uk
Per a user question, spell out that UNNEST() returns array elements
in storage order; also provide an example to clarify the behavior for
multi-dimensional arrays.
While here, also clarify the SELECT reference page's description of
WITH ORDINALITY. These details were already given in 7.2.1.4, but
a reference page should not omit details.
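For instance:
SELECT unnest(ARRAY[[1,2],[3,4]]);
-- yields 1, 2, 3, 4: the elements in storage order
SELECT * FROM unnest(ARRAY['a','b']) WITH ORDINALITY AS t(elem, ord);
-- adds an ord column numbering the output rows 1, 2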
Back-patch to v13; there's not room in the table in older versions.
Discussion: https://postgr.es/m/FF1FB31F-0507-4F18-9559-2DE6E07E3B43@gmail.com
The page about privilege rights mentioned that TRUNCATE could be applied
to views or even other relation types. This is confusing, as the command
can be used only on plain tables and on partitioned tables.
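For instance (names hypothetical):
CREATE VIEW v AS SELECT 1 AS a;
TRUNCATE v;  -- rejected: TRUNCATE works only on plain and partitioned tables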
Oversight in afc4a78.
Reported-by: Harisai Hari
Reviewed-by: Laurenz Albe
Discussion: https://postgr.es/m/161157636877.14625.15340884663716426087@wrigleys.postgresql.org
Backpatch-through: 12
When reporting connection errors, we might show a database name in the
message that's not the one we actually tried to connect to, if the
database was taken from libpq defaults instead of from user parameters.
Fix such error messages to use PQdb(), which reports the correct name.
(But, per commit 2930c05634bc, make sure not to try to print NULL.)
Apply to branches 9.5 through 13. Branch master has already been
changed differently by commit 58cd8dca3de0.
Reported-by: Robert Haas <robertmhaas@gmail.com>
Discussion: https://postgr.es/m/CA+TgmobssJ6rS22dspWnu-oDxXevGmhMD8VcRBjmj-b9UDqRjw@mail.gmail.com
The loops to identify word boundaries could access past the end of
the input string. Likely that would never result in an actual
crash, but it makes valgrind unhappy.
The logic to try different numbers of words didn't work when the
input has two words but we only have a match to the first, e.g.
"\h with select". (We must "continue" the pass loop, not "break".)
The logic to compute nl_count was bizarrely managed, and in at
least two code paths could end up calling PageOutput with
nl_count = 0, resulting in failing to paginate output that should
have been fed to the pager. Also, in v12 and up, the nl_count
calculation hadn't been updated to account for the addition of a URL.
The PQExpBuffer holding the command syntax details wasn't freed,
resulting in a session-lifespan memory leak.
While here, improve some comments, choose a more descriptive name
for a variable, and fix an inconsistent datatype choice for another.
Per bug #16837 from Alexander Lakhin. This code is very old,
so back-patch to all supported branches.
Kyotaro Horiguchi and Tom Lane
Discussion: https://postgr.es/m/16837-479bcd56040c71b3@postgresql.org
Both heapgettup() and heapgettup_pagemode() incorrectly set the first page
to scan in a backward scan in which the number of pages to scan was
specified by heap_setscanlimits(). The code incorrectly started the scan
at the end of the relation when startBlk was 0, or otherwise at
startBlk - 1, neither of which is correct when only scanning a subset of
pages.
The fix here checks whether heap_setscanlimits() has changed the number of
pages to scan and, if so, sets the first page to scan to the final page in
the specified range during backward scans.
Proper adjustment of this code was forgotten when heap_setscanlimits() was
added in 7516f5259 back in 9.5. However, in practice nothing in core code
performs backward scans after having used heap_setscanlimits(); yet it is
possible that an extension uses the heap functions in this way, hence
backpatch.
An upcoming patch does use heap_setscanlimits() with backward scans, so
this must be fixed before that can go in.
Author: David Rowley
Discussion: https://postgr.es/m/CAApHDvpGc9h0_oVD2CtgBcxCS1N-qDYZSeBRnUh+0CWJA9cMaA@mail.gmail.com
Backpatch-through: 9.5, all supported versions
DST law changes in Russia (Volgograd zone) and South Sudan.
Historical corrections for Australia, Bahamas, Belize, Bermuda,
Ghana, Israel, Kenya, Nigeria, Palestine, Seychelles, and Vanuatu.
Notably, the Australia/Currie zone has been corrected to the point
where it is identical to Australia/Hobart.
In light of recent discussions, we should instruct people to
install Apple's command line tools; installing Xcode is secondary.
Also, fix sample command for finding out the default sysroot,
as we now know that the command originally recommended can give
a result that doesn't match your OS version.
Also document the workaround to use if you really don't want
configure to select a sysroot at all.
Discussion: https://postgr.es/m/20210119111625.20435-1-james.hilliard1@gmail.com
This text claimed that the reconnection would occur "to the same
server", but there is no such guarantee in the code, nor would
insisting on that be an improvement.
Back-patch to v10 where multi-host connection strings were added.
Discussion: https://postgr.es/m/1095901.1611268376@sss.pgh.pa.us
Previously, pull_varnos() took the relids of a PlaceHolderVar as being
equal to the relids in its contents, but that fails to account for the
possibility that we have to postpone evaluation of the PHV due to outer
joins. This could result in a malformed plan. The known cases end up
triggering the "failed to assign all NestLoopParams to plan nodes"
sanity check in createplan.c, but other symptoms may be possible.
The right value to use is the join level we actually intend to evaluate
the PHV at. We can get that from the ph_eval_at field of the associated
PlaceHolderInfo. However, there are some places that call pull_varnos()
before the PlaceHolderInfos have been created; in that case, fall back
to the conservative assumption that the PHV will be evaluated at its
syntactic level. (In principle this might result in missing some legal
optimization, but I'm not aware of any cases where it's an issue in
practice.) Things are also a bit ticklish for calls occurring during
deconstruct_jointree(), but AFAICS the ph_eval_at fields should have
reached their final values by the time we need them.
The main problem in making this work is that pull_varnos() has no
way to get at the PlaceHolderInfos. We can fix that easily, if a
bit tediously, in HEAD by passing it the planner "root" pointer.
In the back branches that'd cause an unacceptable API/ABI break for
extensions, so leave the existing entry points alone and add new ones
with the additional parameter. (If an old entry point is called and
encounters a PHV, it'll fall back to using the syntactic level,
again possibly missing some valid optimization.)
Back-patch to v12. The computation is surely also wrong before that,
but it appears that we cannot reach a bad plan thanks to join order
restrictions imposed on the subquery that the PlaceHolderVar came from.
The error only became reachable when commit 4be058fe9 allowed trivial
subqueries to be collapsed out completely, eliminating their join order
restrictions.
Per report from Stephan Springl.
Discussion: https://postgr.es/m/171041.1610849523@sss.pgh.pa.us
It emerges that in some phases of the moon (perhaps to do with
directory entry order?), xcrun will report that the SDK path is
/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk
which is normally a symlink to a version-numbered sibling directory.
Our heuristic to skip non-version-numbered pathnames was rejecting
that, which is the wrong thing to do. We'd still like to end up
with a version-numbered PG_SYSROOT value, but we can have that by
dereferencing the symlink.
Like the previous fix, back-patch to all supported versions.
Discussion: https://postgr.es/m/522433.1611089678@sss.pgh.pa.us
By default VACUUM will skip pages that it can't immediately get
exclusive access to, which means that even activities as harmless
and unpredictable as checkpoint buffer writes might prevent a page
from being processed. Ordinarily this is no big deal, but we have
a small number of test cases that examine the results of VACUUM's
processing and therefore will fail if the page of interest is skipped.
This seems to be the explanation for some rare buildfarm failures.
To fix, add the DISABLE_PAGE_SKIPPING option to the VACUUM commands
in tests where this could be an issue.
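The shape of the change, on a hypothetical test table:
VACUUM (FREEZE, DISABLE_PAGE_SKIPPING) test_tbl;
-- waits for each page rather than skipping pages it cannot lock immediately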
In passing, remove a duplicated query in pageinspect/sql/page.sql.
Back-patch as necessary (some of these cases are as old as v10).
Discussion: https://postgr.es/m/413923.1611006484@sss.pgh.pa.us
In commit 9eb5607e699, I got the condition on checking for split or
deleted page wrong: I used && instead of ||. The comment correctly said
"concurrent split _or_ deletion".
As a result, GiST insertion could miss a concurrent split and insert to the
wrong page. Duncan Sands demonstrated this with a test script that did a
lot of concurrent inserts.
Backpatch to v12, where this was introduced. REINDEX is required to fix
indexes that were affected by this bug.
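To recover an affected index (name hypothetical):
REINDEX INDEX my_gist_idx;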
Backpatch-through: 12
Reported-by: Duncan Sands
Discussion: https://www.postgresql.org/message-id/a9690483-6c6c-3c82-c8ba-dc1a40848f11%40deepbluecap.com
Specifying duplicated objects in ALTER DEFAULT PRIVILEGES would lead to unique
constraint violations in pg_default_acl or "tuple already updated by
self" errors. Similarly to GRANT/REVOKE, increment the command ID after
each subcommand processing to allow this case to work transparently.
A regression test is added by tweaking one of the existing queries of
privileges.sql to stress this case.
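A sketch of the previously-failing shape (role names hypothetical):
ALTER DEFAULT PRIVILEGES FOR ROLE alice, alice
    GRANT SELECT ON TABLES TO bob;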
Reported-by: Andrus
Author: Michael Paquier
Reviewed-by: Álvaro Herrera
Discussion: https://postgr.es/m/ae2a7dc1-9d71-8cba-3bb9-e4cb7eb1f44e@hot.ee
Backpatch-through: 9.5
Somebody extended search_plan_tree() to treat MergeAppend exactly
like Append, which is 100% wrong, because unlike Append we can't
assume that only one input node is actively returning tuples.
Hence a cursor using a MergeAppend across a UNION ALL or inheritance
tree could falsely match a WHERE CURRENT OF query at a row that
isn't actually the cursor's current output row, but coincidentally
has the same TID (in a different table) as the current output row.
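A sketch of the susceptible pattern (names hypothetical):
BEGIN;
DECLARE c CURSOR FOR SELECT * FROM parent_tbl ORDER BY a;
-- if the plan is a MergeAppend over the inheritance children...
FETCH 1 FROM c;
UPDATE parent_tbl SET a = a WHERE CURRENT OF c;
-- ...this could previously update a different child's row with the same TID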
Delete the faulty code; this means that such a case will now return
an error like 'cursor "foo" is not a simply updatable scan of table
"bar"', instead of silently misbehaving. Users should not find that
surprising though, as the same cursor query could have failed that way
already depending on the chosen plan. (It would fail like that if the
sort were done with an explicit Sort node instead of MergeAppend.)
Expand the clearly-inadequate commentary to be more explicit about
what this code is doing, in hopes of forestalling future mistakes.
It's been like this for a while, so back-patch to all supported
branches.
Discussion: https://postgr.es/m/482865.1611075182@sss.pgh.pa.us
execCurrent.c's search_plan_tree() assumed that ForeignScanStates
and CustomScanStates necessarily have a valid ss_currentRelation.
This is demonstrably untrue for postgres_fdw's remote join and
remote aggregation plans, and non-leaf custom scans might not have
an identifiable scan relation either. Avoid crashing by ignoring
such nodes when the field is null.
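For example, with postgres_fdw (a sketch; names hypothetical):
BEGIN;
DECLARE c CURSOR FOR SELECT * FROM ft1 JOIN ft2 USING (id);  -- remote join
FETCH 1 FROM c;
UPDATE ft1 SET x = 0 WHERE CURRENT OF c;
-- previously could crash while inspecting the cursor's plan;
-- with the fix it fails cleanly instead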
This solution will lead to errors like 'cursor "foo" is not a
simply updatable scan of table "bar"' in cases where maybe we
could have allowed WHERE CURRENT OF to work. That's not an issue
for postgres_fdw's usages, since joins or aggregations would render
WHERE CURRENT OF invalid anyway. But an otherwise-transparent
upper level custom scan node might find this annoying. When and if
someone cares to expend work on such a scenario, we could invent a
custom-scan-provider callback to determine what's safe.
Report and patch by David Geier, commentary by me. It's been like
this for a while, so back-patch to all supported branches.
Discussion: https://postgr.es/m/0253344d-9bdd-11c4-7f0d-d88c02cd7991@swarm64.com
The context is an object that no longer bears some aclitem that it bore
initially. (A user issued REVOKE or GRANT statements upon the object.)
pg_dump is forming SQL to reproduce the object ACL. Since initdb
creates no ACL bearing GRANT OPTION, reaching this bug requires an
extension where the creation script establishes such an ACL. No PGXN
extension does that. If an installation did reach the bug, pg_dump
would have omitted a semicolon, causing a REVOKE and the next SQL
statement to fail. Separately, since the affected code exists to
eliminate an entire aclitem, it wants plain REVOKE, not REVOKE GRANT
OPTION FOR. Back-patch to 9.6, where commit
23f34fa4ba358671adab16773e79c17c92cbc870 first appeared.
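The affected scenario, sketched (names hypothetical):
-- an extension's creation script establishes an ACL bearing GRANT OPTION:
GRANT SELECT ON ext_table TO ext_role WITH GRANT OPTION;
-- a user later removes that aclitem entirely:
REVOKE SELECT ON ext_table FROM ext_role;
-- pg_dump must then emit a plain REVOKE, properly semicolon-terminated,
-- to reproduce the object's ACL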
Discussion: https://postgr.es/m/20210109102423.GA160022@rfd.leadboat.com
Every core SLRU wraps around. With the exception of pg_notify, the wrap
point can fall in the middle of a page. Account for this in the
PagePrecedes callback specification and in SimpleLruTruncate()'s use of
said callback. Update each callback implementation to fit the new
specification. This changes SerialPagePrecedesLogically() from the
style of asyncQueuePagePrecedes() to the style of CLOGPagePrecedes().
(Whereas pg_clog and pg_serial share a key space, pg_serial is nothing
like pg_notify.) The bug fixed here has the same symptoms and user
followup steps as 592a589a04bd456410b853d86bd05faa9432cbbb. Back-patch
to 9.5 (all supported versions).
Reviewed by Andrey Borodin and (in earlier versions) by Tom Lane.
Discussion: https://postgr.es/m/20190202083822.GC32531@gust.leadboat.com
Add a check that CREATE STATISTICS does not add extended statistics on
system catalogs, similarly to indexes etc. It can be overridden using
the allow_system_table_mods GUC.
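For example (statistics name hypothetical):
CREATE STATISTICS pg_class_stats ON relpages, reltuples FROM pg_class;
-- now rejected unless allow_system_table_mods is enabled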
This bug has existed since 7b504eb282c, which added extended statistics,
so backpatch all the way back to PostgreSQL 10.
Author: Tomas Vondra
Reported-by: Dean Rasheed
Backpatch-through: 10
Discussion: https://postgr.es/m/CAEZATCXAPrrOKwEsyZKQ4uzzJQWBCt6QAvOcgqRGdWwT1zb%2BrQ%40mail.gmail.com
In cases where Xcode is newer than the underlying macOS version,
asking xcodebuild for the SDK path will produce a pointer to the
SDK shipped with Xcode, which may end up building code that does
not work on the underlying macOS version. It appears that in
such cases, xcodebuild's answer also fails to match the default
behavior of Apple's compiler: assuming one has installed Xcode's
"command line tools", there will be an SDK for the OS's own version
in /Library/Developer/CommandLineTools, and the compiler will
default to using that. This is all pretty poorly documented,
but experimentation suggests that "xcrun --show-sdk-path" gives
the sysroot path that the compiler is actually using, at least
in some cases. Hence, try that first, but revert to xcodebuild
if xcrun fails (in very old Xcode, it is missing or lacks the
--show-sdk-path switch).
Also, "xcrun --show-sdk-path" may give a path that is valid but lacks
any OS version identifier. We don't really want that, since most
of the motivation for wiring -isysroot into the build flags at all
is to ensure that all parts of a PG installation are built against
the same SDK, even when considering extensions built later and/or on
a different machine. Insist on finding "N.N" in the directory name
before accepting the result. (Adding "--sdk macosx" to the xcrun
call seems to produce the same answer as xcodebuild, but usually
more quickly because it's cached, so we also try that as a fallback.)
The core reason why we don't want to use Xcode's default SDK in cases
like this is that Apple's technology for introducing new syscalls
does not play nice with Autoconf: for example, configure will think
that preadv/pwritev exist when using a Big Sur SDK, even when building
on an older macOS version where they don't exist. It'd be nice to
have a better solution to that problem, but this patch doesn't attempt
to fix that.
Per report from Sergey Shinderuk. Back-patch to all supported versions.
Discussion: https://postgr.es/m/ed3b8e5d-0da8-6ebd-fd1c-e0ac80a4b204@postgrespro.ru
This is the same fix as commit 9eabfe300 applied to INDEX ATTACH
entries, but for table-to-publication attachments. As in that
case, even though the backend doesn't record "ownership" of the
attachment, we still ought to label it in the dump archive with
the role name that should run the ALTER PUBLICATION command.
The existing behavior causes the ALTER to be done by the original
role that started the restore; that will usually work fine, but
there may be corner cases where it fails.
The bulk of the patch is concerned with changing struct
PublicationRelInfo to include a pointer to the associated
PublicationInfo object, so that we can get the owner's name
out of that when the time comes. While at it, I rewrote
getPublicationTables() to do just one query of pg_publication_rel,
not one per table.
Back-patch to v10 where this code was introduced.
Discussion: https://postgr.es/m/1165710.1610473242@sss.pgh.pa.us
When a tablespace is used in a partitioned relation (per commits
ca4103025dfe in pg12 for tables and 33e6c34c3267 in pg11 for indexes),
it is possible to drop the tablespace, potentially causing various
problems. One such was reported in bug #16577, where a rewriting ALTER
TABLE causes a server crash.
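A sketch of the failure mode (names and path hypothetical):
CREATE TABLESPACE ts LOCATION '/srv/ts';
CREATE TABLE part (a int) PARTITION BY RANGE (a) TABLESPACE ts;
CREATE TABLE part1 PARTITION OF part
    FOR VALUES FROM (0) TO (100) TABLESPACE pg_default;
DROP TABLESPACE ts;  -- previously succeeded, since no physical files live in ts;
                     -- with this fix, the drop is refused while part references it
-- before the fix, a later rewriting ALTER TABLE on part could crash (bug #16577)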
Protect against this by using pg_shdepend to keep track of tablespaces
when used for relations that don't keep physical files; we now abort a
tablespace drop if we see that the tablespace is referenced from any
partitioned relations.
Backpatch this to 11, where this problem has been latent all along. We
don't try to create pg_shdepend entries for existing partitioned
indexes/tables, but any that are modified going forward will be
protected.
Note slight behavior change: when trying to drop a tablespace that
contains both regular tables as well as partitioned ones, you'd
previously get ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE and now you'll
get ERRCODE_DEPENDENT_OBJECTS_STILL_EXIST. Arguably, the latter is more
correct.
It is possible to add protecting pg_shdepend entries for existing
tables/indexes, by doing
ALTER TABLE ONLY some_partitioned_table SET TABLESPACE pg_default;
ALTER TABLE ONLY some_partitioned_table SET TABLESPACE original_tablespace;
for each partitioned table/index that is not in the database default
tablespace. Because these partitioned objects do not have storage, no
file needs to be actually moved, so it shouldn't take more time than
what's required to acquire locks.
This query can be used to search for such relations:
SELECT ... FROM pg_class WHERE relkind IN ('p', 'I') AND reltablespace <> 0
Reported-by: Alexander Lakhin <exclusion@gmail.com>
Discussion: https://postgr.es/m/16577-881633a9f9894fd5@postgresql.org
Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Commit fef5b47f6b added the regression test to check whether a standby is
able to follow a primary on a newer timeline when WAL archiving is enabled.
But the buildfarm member florican reported that this test failed because
the requested WAL segment had already been removed and replication failed.
This is a timing issue: since the test neither uses a replication slot nor
sets wal_keep_size, a checkpoint could remove a WAL segment that is still
necessary for replication.
This commit stabilizes the test by setting wal_keep_size.
Back-patch to v13 where the regression test that this commit stabilizes
was added.
Author: Fujii Masao
Discussion: https://postgr.es/m/X//PsenxcC50jDzX@paquier.xyz
Commit 709d003fbd refactored WAL-reading code, but accidentally caused
WalSndSegmentOpen() to fail to follow a timeline switch while reading from
a historic timeline. This issue caused a standby to fail to follow a primary
on a newer timeline when WAL archiving is enabled.
If there is a timeline switch within the segment, WalSndSegmentOpen() should
read from the WAL segment belonging to the new timeline. But previously,
since it failed to follow a timeline switch, it tried to read the WAL
segment belonging to the old timeline. When WAL archiving is enabled, that
WAL segment doesn't exist, because it has been renamed to .partial. The
primary would therefore try to read a non-existent WAL segment, causing
replication to fail with the error "ERROR: requested WAL segment ... has
already been removed".
This commit fixes WalSndSegmentOpen() so that it's able to follow a timeline
switch, to ensure that a standby is able to follow a primary on a newer
timeline even when WAL archiving is enabled.
This commit also adds the regression test to check whether a standby is
able to follow a primary on a newer timeline when WAL archiving is enabled.
Back-patch to v13 where the bug was introduced.
Reported-by: Kyotaro Horiguchi
Author: Kyotaro Horiguchi, tweaked by Fujii Masao
Reviewed-by: Alvaro Herrera, Fujii Masao
Discussion: https://postgr.es/m/20201209.174314.282492377848029776.horikyota.ntt@gmail.com
The point of this restriction is to avoid trying to substitute variables
into timestamp literal values, which may contain strings like '12:34'.
There is a good deal more that should be done to reduce pgbench's
tendency to substitute where it shouldn't. But this is sufficient to
solve the case complained of by Jaime Soler, and it's simple enough
to back-patch.
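The sort of script clause at issue (a sketch; bid is a pgbench variable):
\set bid random(1, 10)
SELECT * FROM log WHERE btime = '12:34' AND bid = :bid;
-- ':34' inside the literal must not be treated as a variable reference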
Back-patch to v11; before commit 9d36a3866, pgbench had a slightly
different definition of what a variable name is, and anyway it seems
unwise to change long-stable branches for this.
Fabien Coelho
Discussion: https://postgr.es/m/alpine.DEB.2.22.394.2006291740420.805678@pseudo
Although a partitioned index's attachment to its parent doesn't
have separate ownership, the ArchiveEntry for it needs to be
marked with an owner anyway, to ensure that the ALTER command
is run by the appropriate role when restoring with
--use-set-session-authorization. Without this, the ALTER will
be run by the role that started the restore session, which will
usually work but is formally the wrong thing.
Back-patch to v11 where this type of ArchiveEntry was added.
In HEAD, add equivalent commentary to the just-added TABLE ATTACH
case, which I'd made do the right thing already.
Discussion: https://postgr.es/m/1094034.1610418498@sss.pgh.pa.us
brenext(), when parsing a '*' quantifier, forgot to return any "value"
for the token; per the equivalent case in next(), it should return
value 1 to indicate that greedy rather than non-greedy behavior is
wanted. The result is that the compiled regexp could behave like 'x*?'
rather than the intended 'x*', if we were unlucky enough to have
a zero in v->nextvalue at this point. That seems to happen with some
reliability if we have '.*' at the beginning of a BRE-mode regexp,
although that depends on the initial contents of a stack-allocated
struct, so it's not guaranteed to fail.
Found by Alexander Lakhin using valgrind testing. This bug seems
to be aboriginal in Spencer's code, so back-patch all the way.
Discussion: https://postgr.es/m/16814-6c5e3edd2bdf0d50@postgresql.org
We found last February that the error-case tests added by commit
008cf0409 failed on OpenBSD, because that platform doesn't really
check locale names. At the time it seemed that that was only an issue
for LC_CTYPE, but testing on a more recent version of OpenBSD shows
that it's now equally lax about LC_COLLATE.
Rather than dropping the LC_COLLATE test too, put back LC_CTYPE
(reverting c4b0edb07), and adjust these tests to accept the different
error message that we get if setlocale() doesn't reject a bogus locale
name. The point of these tests is not really what the backend does
with the locale name, but to show that createdb quotes funny locale
names safely; so we're not losing test reliability this way.
Back-patch as appropriate.
Discussion: https://postgr.es/m/231373.1610058324@sss.pgh.pa.us