It's possible for a subplan below a Memoize node to contain a parameter
from above the Memoize node. If this parameter changes, then cache
entries may become stale due to the new parameter value.
Previously Memoize was mistakenly unaware of this. We fix this here by
flushing the cache whenever a parameter that's not part of the cache
key changes.
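As a sketch of the problem shape (hypothetical tables; the planner must
choose a Memoize node for the inner lateral subquery, keyed on t1.b,
while t2.a is a parameter originating above that node):

    SELECT *
    FROM t2,
         LATERAL (SELECT *
                  FROM t1,
                       LATERAL (SELECT t1.b AS b, t2.a AS a) AS s) AS ss;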
Bug: #17213
Reported by: Elvis Pranskevichus
Author: David Rowley
Discussion: https://postgr.es/m/17213-988ed34b225a2862@postgresql.org
Backpatch-through: 14, where Memoize was added
Memoize would always use the hash equality operator for the cache key
types to determine if the current set of parameters was the same as some
previously cached set. Certain types, such as floating point, where -0.0
and +0.0 differ in their binary representation but are classed as equal
by the hash equality operator, may cause problems: unless the join uses
that same operator, whichever join operator is in use might be able to
distinguish the two values, in which case we may accidentally return
incorrect rows out of the cache.
To fix this, we add a binary mode to Memoize that compares the current
set of parameters to previously cached values bit-by-bit rather than
logically using the hash equality operator. This binary mode is always
used for LATERAL joins, and it's used for normal joins when any of the
join operators are not hashable.
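For example, with float8 (illustrative only), two parameter values can
be classed as equal by the hash equality operator while still being
distinguishable bit-by-bit:

    SELECT '-0.0'::float8 = '0.0'::float8;                 -- true
    SELECT float8send('-0.0'::float8) <>
           float8send('0.0'::float8);                      -- also true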
Reported-by: Tom Lane
Author: David Rowley
Discussion: https://postgr.es/m/3004308.1632952496@sss.pgh.pa.us
Backpatch-through: 14, where Memoize was added
Commit ac9099fc1 rearranged the logic in spgGetCache() that determines
the index's attType (nominal input data type) and leafType (actual
type stored in leaf index tuples). Turns out this broke things for
the case where (a) the actual input data type is different from the
nominal type, (b) the opclass's config function leaves leafType
defaulted, and (c) the opclass has no "compress" function. (b) caused
us to assign the actual input data type as leafType, and then since
that's not attType, we complained that a "compress" function is
required. For non-polymorphic opclasses, condition (a) arises in
binary-compatible cases, such as using SP-GiST text_ops for a varchar
column, or using any opclass on a domain over its nominal input type.
To fix, use attType for leafType when the index's declared column type
is different from but binary-compatible with attType. Do this only in
the defaulted-leafType case, to avoid overriding any explicit
selection made by the opclass.
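For example, this previously failed with a complaint that a "compress"
function is required, even though none should be needed (text_ops leaves
leafType defaulted and has no "compress" function):

    CREATE TABLE t (v varchar);
    CREATE INDEX ON t USING spgist (v text_ops);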
Per bug #17294 from Ilya Anfimov. Back-patch to v14.
Discussion: https://postgr.es/m/17294-8f6c7962ce877edc@postgresql.org
This commit adds to the main regression test suite a table with all
the in-core data types (some exceptions apply). This table is not
dropped, so that pg_upgrade is able to check the binary compatibility
of the types tracked in the table. If a new type is added in core,
this part of the tests will need a refresh, and the tests are designed
to fail in that case so the need for a refresh gets noticed.
As this is useful for upgrades, and upgrades rely on the objects
created in the regression test suite of the old version being upgraded
from, this is backpatched down to 12, the last point where a binary
incompatible change was made (7c15cef). This will hopefully be enough
to find out if something gets broken during the development of a new
version of Postgres, so that action can be taken in pg_upgrade itself
in such cases (like 0ccfc28 for sql_identifier).
An area that is not covered yet is related to external modules, which
may create their own types. The testing infrastructure of pg_upgrade is
not integrated yet with the external modules stored in core
(src/test/modules/ or contrib/; all use the same database name for their
tests, so there would be an overlap). This could be improved in the
future.
Author: Justin Pryzby
Reviewed-by: Jacob Champion, Peter Eisentraut, Tom Lane, Michael Paquier
Discussion: https://postgr.es/m/20201206180248.GI24052@telsasoft.com
Backpatch-through: 12
If a SQL-standard function body contains an INSERT ... SELECT statement,
any function parameters referenced within the SELECT were always printed
in $N style, rather than using the parameter name if any. While not
strictly incorrect, this wasn't the intention, and it's inconsistent
with the way that such parameters would be printed in any other kind
of statement.
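For example, in a sketch like this (hypothetical table tab), the
reference to x inside the SELECT was printed as "$1":

    CREATE FUNCTION f(x int) RETURNS void
    LANGUAGE SQL
    BEGIN ATOMIC
        INSERT INTO tab SELECT x;
    END;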
The cause is that the recursion to get_query_def from
get_insert_query_def neglected to pass down the context->namespaces
list, passing constant NIL instead. This is a very ancient oversight,
but AFAICT it had no visible consequences before commit e717a9a18
added an outermost namespace with function parameters. We don't allow
INSERT ... SELECT as a sub-query, except in a top-level WITH clause,
where it couldn't contain any outer references that might need to access
upper namespaces. So although that's arguably a bug, I don't see any
point in changing it before v14.
In passing, harden the code added to get_parameter by e717a9a18 so that
it won't crash if a PARAM_EXTERN Param appears in an unexpected place.
Per report from Erki Eessaar. Code fix by me, regression test case
by Masahiko Sawada.
Discussion: https://postgr.es/m/AM9PR01MB8268347BED344848555167FAFE949@AM9PR01MB8268.eurprd01.prod.exchangelabs.com
When changing REPLICA IDENTITY INDEX to another one, the target table's
relcache was not being invalidated. This leads to update/delete
operations being skipped during apply on the subscriber side, as the
columns required to search for corresponding rows won't get logged.
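A sketch of the affected sequence (hypothetical names):

    ALTER TABLE tab REPLICA IDENTITY USING INDEX tab_new_idx;
    -- without the invalidation, subsequent UPDATE/DELETE operations
    -- still logged the columns of the previously configured index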
Author: Tang Haiying, Hou Zhijie
Reviewed-by: Euler Taveira, Amit Kapila
Backpatch-through: 10
Discussion: https://postgr.es/m/OS0PR01MB61133CA11630DAE45BC6AD95FB939@OS0PR01MB6113.jpnprd01.prod.outlook.com
pg_stat_get_slru() in pgstatfuncs.c would point one element past the
end of the array PgStat_SLRUStats when it finished scanning its entries.
This had no direct consequences, as no data from the extra memory area
was read, but static analyzers would rightfully complain here. So let's
be clean.
While on it, this adds one regression test in the area reserved for
system views.
Reported-by: Alexander Kozhemyakin, via AddressSanitizer
Author: Kyotaro Horiguchi
Discussion: https://postgr.es/m/17280-37da556e86032070@postgresql.org
Backpatch-through: 13
PostgreSQL 13 and newer versions are directly impacted by that through
the SQL function normalize(), which would cause a call of this function
to write one byte past its allocation if given an empty string as
input, when recomposing the string with NFC or NFKC. Older versions
(v10~v12) are not directly affected by this problem, as the only code
path using normalization is SASLprep in SCRAM authentication, which
forbids the case of an empty string, but let's make the code more robust
anyway there, so that any out-of-core callers of this function are covered.
The solution chosen to fix this issue is simple, with the addition of a
fast-exit path if the decomposed string is found to be empty. This would
only happen for an empty string, as at its lowest level a codepoint would
be decomposed as itself if it has no entry in the decomposition table or
if it has a decomposition size of 0.
Some tests are added to cover this issue in v13~. Note that an empty
string has always been considered normalized (grammar "IS NF[K]{C,D}
NORMALIZED", through the SQL function is_normalized()) for all the
operations allowed (NFC, NFD, NFKC and NFKD) since this feature was
introduced in 2991ac5. This behavior is unchanged, but some tests are
added in v13~ to check that.
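For reference, the affected call and the grammar mentioned above look
like this (sketch):

    SELECT normalize('', NFC);      -- wrote one byte past its allocation
    SELECT '' IS NFC NORMALIZED;    -- has always returned true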
While on it, I have also checked "make normalization-check" in
src/common/unicode/ (works in 13~, and breaks in older stable branches
independently of this commit).
The release notes should just mention this commit for v13~.
Reported-by: Matthijs van der Vleuten
Discussion: https://postgr.es/m/17277-0c527a373794e802@postgresql.org
Backpatch-through: 10
We've seen intermittent failures in this test on slower buildfarm
machines, which I think can be explained by assuming that autovacuum
emitted some additional WAL. Disable autovacuum to stabilize it.
In passing, use stringwise not numeric comparison to compare
WAL file names. Doesn't matter at present, but they are
hex strings not decimal ...
Discussion: https://postgr.es/m/1372189.1636499287@sss.pgh.pa.us
When calculating the distance between float4/float8 values, we need to
be a bit more careful about NaN values in order not to trigger an
assert. We consider NaN values to be equal (distance 0.0) and at
infinite distance from all other values.
On builds without asserts, this issue is mostly harmless - the ranges
may be merged in less efficient order, but the index is still correct.
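A sketch of a setup that could hit the assert on assert-enabled builds
(hypothetical table; merging ranges that mix NaN and ordinary values is
the problematic step):

    CREATE TABLE t (f float8);
    CREATE INDEX ON t USING brin (f float8_minmax_multi_ops);
    INSERT INTO t VALUES ('NaN'), (1.5), (2.5);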
Per report from Andreas Seltenreich. Backpatch to 14, where this new
BRIN opclass was introduced.
Reported-by: Andreas Seltenreich
Discussion: https://postgr.es/m/87r1bw9ukm.fsf@credativ.de
If lo_export() fails to open the target file or to write to it, it leaks
the created LargeObjectDesc and its snapshot in the top-transaction
context and resource owner. That's pretty harmless, it's a small leak
after all, but it gives the user a "Snapshot reference leak" warning.
Fix by using a short-lived memory context and no resource owner for
transient LargeObjectDescs that are opened and closed within one function
call. The leak is easiest to reproduce with lo_export() on a directory
that doesn't exist, but in principle the other lo_* functions could also
fail.
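For example (hypothetical large object OID and path):

    SELECT lo_export(16393, '/nonexistent/dir/file');
    -- ERROR:  could not open server file ...
    -- previously followed by: WARNING:  Snapshot reference leak ...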
Backpatch to all supported versions.
Reported-by: Andrew B
Reviewed-by: Alvaro Herrera
Discussion: https://www.postgresql.org/message-id/32bf767a-2d65-71c4-f170-122f416bab7e@iki.fi
Commit b4af70cb, which simplified state managed by VACUUM, performed
refactoring of parallel VACUUM in passing. Confusion about the exact
details of the tasks that the leader process is responsible for led to
code that made it possible for parallel VACUUM to miss a subset of the
table's indexes entirely. Specifically, indexes that fell under the
min_parallel_index_scan_size size cutoff were missed. These indexes are
supposed to be vacuumed by the leader (alongside any parallel unsafe
indexes), but weren't vacuumed at all. Affected indexes could easily
end up with duplicate heap TIDs, once heap TIDs were recycled for new
heap tuples. This had generic symptoms that might be seen with almost
any index corruption involving structural inconsistencies between an
index and its table.
To fix, make sure that the parallel VACUUM leader process performs any
required index vacuuming for indexes that happen to be below the size
cutoff. Also document the design of parallel VACUUM with these
below-size-cutoff indexes.
It's unclear how many users might be affected by this bug. There had to
be at least three indexes on the table to hit the bug: a smaller index,
plus at least two additional indexes that themselves exceed the size
cutoff. Cases with just one additional index would not run into
trouble, since the parallel VACUUM cost model requires two
larger-than-cutoff indexes on the table to apply any parallel
processing. Note also that autovacuum was not affected, since it never
uses parallel processing.
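A sketch of the affected shape (hypothetical table with idx_small below
min_parallel_index_scan_size and idx_big1/idx_big2 above it):

    -- idx_big1 and idx_big2 enable parallel processing; idx_small was
    -- supposed to be vacuumed by the leader, but was skipped entirely
    VACUUM (PARALLEL 2) tab;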
Test case based on tests from a larger patch to test parallel VACUUM by
Masahiko Sawada.
Many thanks to Kamigishi Rei for her invaluable help with tracking this
problem down.
Author: Peter Geoghegan <pg@bowt.ie>
Author: Masahiko Sawada <sawada.mshk@gmail.com>
Reported-By: Kamigishi Rei <iijima.yun@koumakan.jp>
Reported-By: Andrew Gierth <andrew@tao11.riddles.org.uk>
Diagnosed-By: Andres Freund <andres@anarazel.de>
Bug: #17245
Discussion: https://postgr.es/m/17245-ddf06aaf85735f36@postgresql.org
Discussion: https://postgr.es/m/20211030023740.qbnsl2xaoh2grq3d@alap3.anarazel.de
Backpatch: 14-, where the refactoring commit appears.
The opclass parameter Datums from the old index are fetched in the same
way as for predicates and expressions, by grabbing them directly from
the system catalogs. They are then copied into the new IndexInfo that
will be used for the creation of the new copy.
Previously, the new index would be rebuilt with default parameters
rather than the ones pre-defined by the user, and the only way to get
back an index with correct opclass parameters was to recreate one from
scratch.
The issue has been introduced by 911e702.
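For example (a sketch using pg_trgm's gist_trgm_ops):

    CREATE INDEX trgm_idx ON t USING gist (txt gist_trgm_ops (siglen = 64));
    REINDEX INDEX CONCURRENTLY trgm_idx;
    -- previously, the rebuilt index silently dropped siglen = 64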
Author: Michael Paquier
Reviewed-by: Zhihong Yu
Discussion: https://postgr.es/m/YX0CG/QpLXcPr8HJ@paquier.xyz
Backpatch-through: 13
Commits fdd965d07 and 3cd9c3b92 tested CREATE INDEX CONCURRENTLY by
launching two separate pgbench runs concurrently. This was needed so
that only a single client thread would run CREATE INDEX CONCURRENTLY,
avoiding deadlock between two CICs. However, there's a better way,
which is to use an advisory lock to prevent concurrent CICs. That's
better in part because the test code is shorter and more readable, but
mostly because it automatically scales things to launch an appropriate
number of CICs relative to the number of INSERT transactions.
As committed, typically half to three-quarters of the CIC transactions
were pointless because the INSERT transactions had already stopped.
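The technique looks roughly like this in the CIC pgbench script (a
sketch; hypothetical lock key, index and table names):

    SELECT pg_try_advisory_lock(42)::integer AS gotlock \gset
    \if :gotlock
        DROP INDEX CONCURRENTLY idx;
        CREATE INDEX CONCURRENTLY idx ON tbl (i);
        SELECT pg_advisory_unlock(42);
    \endif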
In passing, remove background_pgbench, which was added to support
these tests and isn't needed anymore. We can always put it back
if we find a use for it later.
Back-patch to v12; older pgbench versions lack the
conditional-execution features needed for this method.
Tom Lane and Andrey Borodin
Discussion: https://postgr.es/m/139687.1635277318@sss.pgh.pa.us
CIC and REINDEX CONCURRENTLY assume backends see their catalog changes
no later than each backend's next transaction start. That failed to
hold when a backend absorbed a relevant invalidation in the middle of
running RelationBuildDesc() on the CIC index. Queries that use the
resulting index can silently fail to find rows. Fix this for future
index builds by making RelationBuildDesc() loop until it finishes
without accepting a relevant invalidation. It may be necessary to
reindex to recover from past occurrences; REINDEX CONCURRENTLY suffices.
Back-patch to 9.6 (all supported versions).
Noah Misch and Andrey Borodin, reviewed (in earlier versions) by Andres
Freund.
Discussion: https://postgr.es/m/20210730022548.GA1940096@gust.leadboat.com
For non-MSVC builds this is make's $(CURDIR), while for MSVC builds it
is $topdir/$Config/$module. The directory is added as the second element
in the PATH, so that the install location takes precedence, but the
added PATH element takes precedence over the rest of the PATH.
The reason for this is to allow tests to find built products that are
not installed, such as the libpq_pipeline test driver.
The libpq_pipeline test is adjusted to take advantage of this.
Based on a suggestion from Andres Freund.
Backpatch to release 14.
Discussion: https://postgr.es/m/4941f5a5-2d50-1a0e-6701-14c5fefe92d6@dunslane.net
Commit 1b5d797cd4 intended to relax the lock level used to rename
indexes, but inadvertently allowed *any* relation to be renamed with a
lowered lock level, as long as the command is spelled ALTER INDEX.
That's undesirable for other relation types, so retry the operation with
the higher lock if the relation turns out not to be an index.
After this fix, ALTER INDEX <sometable> RENAME will require access
exclusive lock, which it didn't before.
Author: Nathan Bossart <bossartn@amazon.com>
Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
Reported-by: Onder Kalaci <onderk@microsoft.com>
Discussion: https://postgr.es/m/PH0PR21MB1328189E2821CDEC646F8178D8AE9@PH0PR21MB1328.namprd21.prod.outlook.com
ldapsearch's deprecated -h/-p arguments were removed; we need to use -H
now, which has been around for over 20 years.
As perltidy insists on reflowing the parameters anyway, change the order
and "phrasing" to yield a less confusing layout (per suggestion from Tom
Lane).
Discussion: https://postgr.es/m/20211009233850.wvr6apcrw2ai6cnj@alap3.anarazel.de
Backpatch: 11-, where the tests were added.
An update such as "UPDATE ... SET fld[n].subfld = whatever"
failed if the array elements were domains rather than plain
composites. That's because isAssignmentIndirectionExpr()
failed to cope with the CoerceToDomain node that would appear
in the expression tree in this case. The result would typically
be a crash, and even if we accidentally didn't crash, we'd not
correctly preserve other fields of the same array element.
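A minimal sketch of the failing case:

    CREATE TYPE comp AS (subfld int);
    CREATE DOMAIN dcomp AS comp;
    CREATE TABLE t (fld dcomp[]);
    INSERT INTO t VALUES (ARRAY[ROW(1)::comp::dcomp]);
    UPDATE t SET fld[1].subfld = 42;    -- crashed before this fix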
Per report from Onder Kalaci. Back-patch to v11 where arrays of
domains came in.
Discussion: https://postgr.es/m/PH0PR21MB132823A46AA36F0685B7A29AD8BD9@PH0PR21MB1328.namprd21.prod.outlook.com
The grammar of this command run on indexes with column names has always
been authorized by the parser, and it has never been documented.
Since 911e702, it is possible to define opclass parameters as of CREATE
INDEX, which actually broke the old case of ALTER INDEX/TABLE where the
relation-level parameters n_distinct and n_distinct_inherited could be
defined for an index (see 76a47c0 and its thread where this point was
touched on, though it remained unused). Attempting to do that in v13~
would cause the index to become unusable, as there is a new dedicated
code path to load opclass parameters instead of the relation-level ones
previously available. Note that it is possible to fix things with a
manual catalog update to bring the relation back online.
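For example, this is now disallowed (previously, in v13~, it would leave
idx unusable):

    CREATE INDEX idx ON t ((a + b));
    ALTER INDEX idx ALTER COLUMN expr SET (n_distinct = 100);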
This commit disables this command for now, as the use of column names
for indexes does not make sense anyway, particularly when it comes to
index expressions where names are automatically computed. One way to
support this case properly in the future would be to use column numbers
when it comes to indexes, in the same way as ALTER INDEX .. ALTER COLUMN
.. SET STATISTICS.
Partitioned indexes were already blocked, but not plain indexes. Some
tests are added for both cases.
There was some code in ANALYZE to enforce the use of n_distinct for an
index expression if the parameter was defined, but remove it for now
until/if there is support for this (note that index-level parameters
never had support in pg_dump either), so this was just dead code.
Reported-by: Matthijs van der Vleuten
Author: Nathan Bossart, Michael Paquier
Reviewed-by: Vik Fearing, Dilip Kumar
Discussion: https://postgr.es/m/17220-15d684c6c2171a83@postgresql.org
Backpatch-through: 13
If a function-in-FROM laterally references the output of some sub-SELECT
earlier in the FROM clause, and we are able to flatten that sub-SELECT
into the outer query, the expression(s) copied into the function RTE
missed being processed by eval_const_expressions. This'd lead to trouble
and probable crashes at execution if such expressions contained
named-argument function call syntax or functions with defaulted arguments.
The bug is masked if the query contains any explicit JOIN syntax, which
may help explain why we'd not noticed.
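A sketch of the affected shape (hypothetical function f taking a named
argument):

    SELECT *
    FROM (SELECT a FROM t) AS ss,
         LATERAL f(x => ss.a);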
Per bug #17227 from Bernd Dorn. This is an oversight in commit 7266d0997,
so back-patch to v13 where that came in.
Discussion: https://postgr.es/m/17227-5a28ed1512189fa4@postgresql.org
The test code added with ff9f111bce fails under valgrind, and probably
other slow cases too, because if (say) autovacuum runs in between and
produces WAL of its own, the large INSERT fails to account for that in
the LSN calculations. Rewrite to use a DO loop.
Per complaint from Andres Freund.
Backpatch to all branches.
Discussion: https://postgr.es/m/20211013180338.5guyqzpkcisqugrl@alap3.anarazel.de
Incrementing the level of the call stack reported is useful for
debugging purposes, as it allows one to know which part of a test
exactly is failing, especially when a test is structured with
subroutines that call routines from Test::More.
This adds more incrementations of $Test::Builder::Level where debugging
gets improved (for example, it does not make sense for some paths like
pg_rewind where long subroutines are used).
A note is added to src/test/perl/README about that, based on a
suggestion from Andrew Dunstan and wording from both of us.
Usage of Test::Builder::Level has spread in 12, so a backpatch down to
that version is done.
Reviewed-by: Andrew Dunstan, Peter Eisentraut, Daniel Gustafsson
Discussion: https://postgr.es/m/YV1CCFwgM1RV1LeS@paquier.xyz
Backpatch-through: 12
This fixes a loss of precision that occurs when the first input is
very close to 1, so that its logarithm is very small.
Formerly, during the initial low-precision calculation to estimate the
result weight, the logarithm was computed to a local rscale that was
capped to NUMERIC_MAX_DISPLAY_SCALE (1000). However, the base may be
as close as 1e-16383 to 1, hence its logarithm may be as small as
1e-16383, and so the local rscale needs to be allowed to exceed 16383,
otherwise all precision is lost, leading to a poor choice of rscale
for the full-precision calculation.
Fix this by removing the cap on the local rscale during the initial
low-precision calculation, as we already do in the full-precision
calculation. This doesn't change the fact that the initial calculation
is a low-precision approximation, computing the logarithm to around 8
significant digits, which is very fast, especially when the base is
very close to 1.
Patch by me, reviewed by Alvaro Herrera.
Discussion: https://postgr.es/m/CAEZATCV-Ceu%2BHpRMf416yUe4KKFv%3DtdgXQAe5-7S9tD%3D5E-T1g%40mail.gmail.com
3c5b0685b9 used setFilePointer() to set the position of the filehandle, but
passed the wrong filehandle, always leaving the position at 0. Instead of just
fixing that, remove the use of setFilePointer(); we have a perl fd at this
point, so we can just use perl's seek().
Additionally, the perl filehandle wasn't closed, just the windows filehandle.
Reviewed-By: Andrew Dunstan <andrew@dunslane.net>
Author: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/20211003173038.64mmhgxctfqn7wl6@alap3.anarazel.de
Backpatch: 9.6-, like 3c5b0685b9
Prior to v14, we insisted that the query in RETURN QUERY be of a type
that returns tuples. (For instance, INSERT RETURNING was allowed,
but not plain INSERT.) That happened indirectly because we opened a
cursor for the query, so spi.c checked SPI_is_cursor_plan(). As a
consequence, the error message wasn't terribly on-point, but at least
it was there.
Commit 2f48ede08 lost this detail. Instead, plain RETURN QUERY
insisted that the query be a SELECT (by checking for SPI_OK_SELECT)
while RETURN QUERY EXECUTE failed to check the query type at all.
Neither of these changes was intended.
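In a plpgsql function, for example (sketch):

    RETURN QUERY INSERT INTO t VALUES (1);              -- rejected again
    RETURN QUERY INSERT INTO t VALUES (1) RETURNING id; -- still fine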
The only convenient place to check this in the EXECUTE case is inside
_SPI_execute_plan, because we haven't done parse analysis until then.
So we need to pass down a flag saying whether to enforce that the
query returns tuples. Fortunately, we can squeeze another boolean
into struct SPIExecuteOptions without an ABI break, since there's
padding space there. (It's unlikely that any extensions would
already be using this new struct, but preserving ABI in v14 seems
like a smart idea anyway.)
Within spi.c, it seemed like _SPI_execute_plan's parameter list
was already ridiculously long, and I didn't want to make it longer.
So I thought of passing SPIExecuteOptions down as-is, allowing that
parameter list to become much shorter. This makes the patch a bit
more invasive than it might otherwise be, but it's all internal to
spi.c, so that seems fine.
Per report from Marc Bachmann. Back-patch to v14 where the
faulty code came in.
Discussion: https://postgr.es/m/1F2F75F0-27DF-406F-848D-8B50C7EEF06A@gmail.com
The previous approach didn't really work on windows, due to the PATH separator
being ';', not ':'. Instead of making the PATH change more complicated,
reference the binary using the TESTDIR environment variable.
Reported-By: Andres Freund <andres@anarazel.de>
Suggested-By: Andrew Dunstan <andrew@dunslane.net>
Discussion: https://postgr.es/m/20210930214040.odkdd42vknvzifm6@alap3.anarazel.de
Backpatch: 14-, where the test was introduced.
Commit ff9f111bce added some test code that's unportable and doesn't
add meaningful coverage. Remove it rather than try to get it to work
everywhere.
While at it, fix a typo in a log message added by the aforementioned
commit.
Backpatch to 14.
Discussion: https://postgr.es/m/3000074.1632947632@sss.pgh.pa.us
Computing related timestamps by subtracting "N days" is sensitive
to the prevailing timezone, since we interpret that as "same local
time on the N'th prior day". Even though the intervals in question
are only two to four days, through remarkable bad luck they managed
to cross the end of Ramadan in 2014, causing the test's output to
change if timezone is set to Africa/Casablanca. (Maybe in other
Muslim areas as well; I didn't check.) There's absolutely no reason
for this test to exercise interval subtraction, so just get rid of
that and use plain timestamptz constants representing the intended
values.
Per report from Andres Freund. Back-patch to v10 where this test
script came in.
Discussion: https://postgr.es/m/20210930183641.7lh4jhvpipvromca@alap3.anarazel.de
First, as pointed out by Tom Lane and Michael Paquier, I failed to
realize that Windows' PostgresNode needs an extra pg_hba.conf line
(added by PostgresNode->set_replication_conf, called internally by
->init() when 'allows_streaming=>1' is given -- but I purposefully
omitted that). I think a good fix should be to have nodes with only
'has_archiving=>1' set up for replication too, but that's a bigger
discussion. Fix it by calling ->set_replication_conf, which is not
unprecedented, as pointed out by Andrew Dunstan.
I also forgot to uncomment a ->finish() call for a pumpable IPC::Run
file descriptor. Apparently this is innocuous on almost all platforms.
Backpatch to 14. This file was added to the older branches too, but not
this particular part of the test.
Discussion: https://postgr.es/m/3000074.1632947632@sss.pgh.pa.us
Discussion: https://postgr.es/m/YVT7qwhR8JmC2kfz@paquier.xyz
Physical replication always ships WAL segment files to replicas once
they are complete. This is a problem if one WAL record is split across
a segment boundary and the primary server crashes before writing down
the segment with the next portion of the WAL record: WAL writing after
crash recovery would happily resume at the point where the broken record
started, overwriting that record ... but any standby or backup may have
already received a copy of that segment, and they are not rewinding.
This causes standbys to stop following the primary after the latter
crashes:
LOG: invalid contrecord length 7262 at A8/D9FFFBC8
because the standby is still trying to read the continuation record
(contrecord) for the original long WAL record, but it is not there and
it will never be. A workaround is to stop the replica, delete the WAL
file, and restart it -- at which point a fresh copy is brought over from
the primary. But that's pretty labor intensive, and I bet many users
would just give up and re-clone the standby instead.
A fix for this problem was already attempted in commit 515e3d84a0, but
it only addressed the case for the scenario of WAL archiving, so
streaming replication would still be a problem (as well as other things
such as taking a filesystem-level backup while the server is down after
having crashed), and it had performance scalability problems too; so it
had to be reverted.
This commit fixes the problem using an approach suggested by Andres
Freund, whereby the initial portion(s) of the split-up WAL record are
kept, and a special type of WAL record is written where the contrecord
was lost, so that WAL replay in the replica knows to skip the broken
parts. With this approach, we can continue to stream/archive segment
files as soon as they are complete, and replay of the broken records
will proceed across the crash point without a hitch.
Because a new type of WAL record is added, users should be careful to
upgrade standbys first, primaries later. Otherwise they risk the standby
being unable to start if the primary happens to write such a record.
A new TAP test that exercises this is added, but the portability of it
is yet to be seen.
This has been wrong since the introduction of physical replication, so
backpatch all the way back. In stable branches, keep the new
XLogReaderState members at the end of the struct, to avoid an ABI
break.
Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
Reviewed-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Reviewed-by: Nathan Bossart <bossartn@amazon.com>
Discussion: https://postgr.es/m/202108232252.dh7uxf6oxwcy@alvherre.pgsql
Updates/Deletes on a partition were allowed even without a replica
identity after the parent table was added to a publication. This would
later lead to an error on subscribers. The reason was that we were not
invalidating the partition's relcache, so the publication information
for partitions was not getting rebuilt. Similarly, we were not
invalidating the partitions' relcache after dropping a partitioned
table from a publication, which would prohibit Updates/Deletes on its
partitions without replica identity even when they are no longer part
of any publication.
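A sketch of the first problem (hypothetical names; part1 is a partition
of parent and lacks a replica identity):

    CREATE PUBLICATION pub FOR TABLE parent;
    UPDATE part1 SET val = 0;  -- was allowed, then failed on the subscriber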
Reported-by: Haiying Tang
Author: Hou Zhijie and Vignesh C
Reviewed-by: Vignesh C and Amit Kapila
Backpatch-through: 13
Discussion: https://postgr.es/m/OS0PR01MB6113D77F583C922F1CEAA1C3FBD29@OS0PR01MB6113.jpnprd01.prod.outlook.com
Contrary to the output of native perl, Msys perl generates output with
CRLF characters. There are already places in the TAP code where CRLFs
(\r\n) are automatically converted to LF (\n) on Msys, but we missed a
couple of places when running commands and using their output for
comparison, which would lead to failures.
This problem has been found thanks to the test added in 5adb067 using
TestLib::command_checks_all(), but after a closer look more code paths
were missing a filter.
This is backpatched all the way down to prevent any surprises if a new
test is introduced in stable branches.
Reviewed-by: Andrew Dunstan, Álvaro Herrera
Discussion: https://postgr.es/m/1252480.1631829409@sss.pgh.pa.us
Backpatch-through: 9.6
Commit 55dc86eca changed pull_varnos to use (if possible) the associated
ph_eval_at for a PlaceHolderVar. I missed a fine point though: we might
be looking at a PHV in the quals or tlist of a child appendrel, in which
case we need to compute a ph_eval_at value that's been translated in the
same way that the PHV itself has been (cf. adjust_appendrel_attrs).
Fortunately, enough info is available in the PlaceHolderInfo to make
such translation possible without additional outside data, so we don't
need another round of uglification of planner APIs. This is a little
bit complicated, but since it's a hard-to-hit corner case, I'm not much
worried about adding cycles here.
Per report from Jaime Casanova. Back-patch to v12, like the previous
commit.
Discussion: https://postgr.es/m/20210915230959.GB17635@ahch-to
The rewriter transformation for SEARCH BREADTH FIRST produces a
FieldSelect on a Var of type RECORD, where the Var references the
recursive union's worktable output. EXPLAIN VERBOSE failed to handle
this case, because it only expected such Vars to appear in CteScans
not WorkTableScans. Fix that, and add some test cases exercising
EXPLAIN on SEARCH and CYCLE queries.
In principle this oversight is an old bug, but it seems that the
case is unreachable without SEARCH BREADTH FIRST, because the
parser fails when attempting to create such a reference manually.
So for today I'll just patch HEAD/v14. Someday we might find that
the code portion of this patch needs to be back-patched further.
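For example, this now works (a sketch along the lines of the new test
cases):

    EXPLAIN (VERBOSE, COSTS OFF)
    WITH RECURSIVE t(n) AS (
        VALUES (1)
      UNION ALL
        SELECT n + 1 FROM t WHERE n < 5
    ) SEARCH BREADTH FIRST BY n SET ord
    SELECT * FROM t;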
Per report from Atsushi Torikoshi.
Discussion: https://postgr.es/m/5bafa66ad529e11860339565c9e7c166@oss.nttdata.com
It's possible for us to copy an AlternativeSubPlan expression node
into multiple places, for example the scan quals of several
partition children. Then it's possible that we choose a different
one of the alternatives as optimal in each place. Commit 41efb8340
failed to consider this scenario, so its attempt to remove "unused"
subplans could remove subplans that were still used elsewhere.
Fix by delaying the removal logic until we've examined all the
AlternativeSubPlans in a given query level. (This does assume that
AlternativeSubPlans couldn't get copied to other query levels, but
for the foreseeable future that's fine; cf qual_is_pushdown_safe.)
Per report from Rajkumar Raghuwanshi. Back-patch to v14
where the faulty logic came in.
Discussion: https://postgr.es/m/CAKcux6==O3NNZC3bZ2prRYv3cjm3_Zw1GfzmOjEVqYN4jub2+Q@mail.gmail.com
We have long forbidden fetching backwards from a NO SCROLL cursor,
but the prohibition didn't extend to cases in which we rewind the
query altogether and then re-fetch forwards. I think the reason is
that this logic was mainly meant to protect plan nodes that can't
be run in the reverse direction. However, re-reading the query output
is problematic if the query is volatile (which includes SELECT FOR
UPDATE, not just queries with volatile functions): the re-read can
produce different results, which confuses the cursor navigation logic
completely. Another reason for disliking this approach is that some
code paths will either fetch backwards or rewind-and-fetch-forwards
depending on the distance to the target row; so that seemingly
identical use-cases may or may not draw the "cursor can only scan
forward" error. Hence, let's clean things up by disallowing rewind
as well as fetch-backwards in a NO SCROLL cursor.
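For example (sketch):

    BEGIN;
    DECLARE c NO SCROLL CURSOR FOR SELECT * FROM t FOR UPDATE;
    FETCH FORWARD 5 FROM c;
    MOVE ABSOLUTE 0 IN c;  -- a rewind: now treated like fetching backwards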
Ordinarily we'd only make such a definitional change in HEAD, but
there is a third reason to consider this change now. Commit ba2c6d6ce
created some new user-visible anomalies for non-scrollable cursors
WITH HOLD, in that navigation in the cursor result got confused if the
cursor had been partially read before committing. The only good way
to resolve those anomalies is to forbid rewinding such a cursor, which
allows removal of the incorrect cursor state manipulations that
ba2c6d6ce added to PersistHoldablePortal.
To minimize the behavioral change in the back branches (including
v14), refuse to rewind a NO SCROLL cursor only when it has a holdStore,
ie has been held over from a previous transaction due to WITH HOLD.
This should avoid breaking most applications that have been sloppy
about whether to declare cursors as scrollable. We'll enforce the
prohibition across-the-board beginning in v15.
Back-patch to v11, as ba2c6d6ce was.
Discussion: https://postgr.es/m/3712911.1631207435@sss.pgh.pa.us
If we copy data-modifying CTEs from the original query to a replacement
query (from a DO INSTEAD rule), we must set hasModifyingCTE properly
in the replacement query. Failure to do this can cause various
unpleasantness, such as unsafe usage of parallel plans. The code also
neglected to propagate hasRecursive, though that's only cosmetic at
the moment.
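A sketch of the affected case (hypothetical view v with a DO INSTEAD
rule, and a log table):

    WITH m AS (INSERT INTO log VALUES (1) RETURNING *)
    INSERT INTO v SELECT * FROM m;
    -- the replacement query built from v's rule must keep
    -- hasModifyingCTE set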
A difficulty arises if the rule action is an INSERT...SELECT. We
attach the original query's RTEs and CTEs to the sub-SELECT Query, but
data-modifying CTEs are only allowed to appear in the topmost Query.
For the moment, throw an error in such cases. It would probably be
possible to avoid this error by attaching the CTEs to the top INSERT
Query instead; but that would require a bunch of new code to adjust
ctelevelsup references. Given the narrowness of the use-case, and
the need to back-patch this fix, it does not seem worth the trouble
for now. We can revisit this if we get field complaints.
Per report from Greg Nancarrow. Back-patch to all supported branches.
(The test case added here does not fail before v10, but there are
plenty of places checking top-level hasModifyingCTE in 9.6, so I have
no doubt that this code change is necessary there too.)
Greg Nancarrow and Tom Lane
Discussion: https://postgr.es/m/CAJcOf-f68DT=26YAMz_i0+Au3TcLO5oiHY5=fL6Sfuits6r+_w@mail.gmail.com
Discussion: https://postgr.es/m/CAJcOf-fAdj=nDKMsRhQzndm-O13NY4dL6xGcEvdX5Xvbbi0V7g@mail.gmail.com
Commit 01e658fa74 added hash support for row types. This also added
support for hashing anonymous record types, using the same approach
that the type cache uses for comparison support for record types: It
just reports that it works, but it might fail at run time if a
component type doesn't actually support the operation. We get away
with that for comparison because most types support that. But some
types don't support hashing, so the current state can result in
failures at run time where the planner chooses hashing over sorting,
whereas that previously worked if only sorting was an option.
We do, however, want the record hashing support for path tracking in
recursive unions, and the SEARCH and CYCLE clauses built on that. In
that case, hashing is the only plan option. To enable that, this
commit implements the following approach: The type cache does not
report that hashing is available for the record type. This undoes
that part of 01e658fa74. Instead, callers that require hashing no
matter what can override that result themselves. This patch only
touches the callers to make the aforementioned recursive query cases
work, namely the parse analysis of unions, as well as the hash_array()
function.
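For example, the CYCLE clause's row-type path tracking keeps working
(sketch):

    WITH RECURSIVE t(n) AS (
        VALUES (1)
      UNION ALL
        SELECT n + 1 FROM t WHERE n < 3
    ) CYCLE n SET is_cycle USING path
    SELECT * FROM t;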
Reported-by: Sait Talha Nisanci <sait.nisanci@microsoft.com>
Bug: #17158
Discussion: https://www.postgresql.org/message-id/flat/17158-8a2ba823982537a4%40postgresql.org
Updates/Deletes on a relation were allowed even without replica identity
after we define the publication for all tables. This would later lead to
an error on subscribers. The reason was that for such publications we were
not invalidating the relcache and the publication information for
relations was not getting rebuilt. Similarly, we were not invalidating the
relcache after dropping such publications, which would prohibit
Updates/Deletes without replica identity even without any publication.
Author: Vignesh C and Hou Zhijie
Reviewed-by: Hou Zhijie, Kyotaro Horiguchi, Amit Kapila
Backpatch-through: 10, where it was introduced
Discussion: https://postgr.es/m/CALDaNm0pF6zeWqCA8TCe2sDuwFAy8fCqba=nHampCKag-qLixg@mail.gmail.com
Modern POSIX seems to require strtod() to accept "-NaN", but there's
nothing about NaN in SUSv2, and some of our oldest buildfarm members
don't like it. Let's try writing it as -'NaN' instead; that seems
to produce the same result, at least on Intel hardware.
Per buildfarm.
The IEEE 754 standard allows a wide variety of bit patterns for NaNs,
of which at least two ("NaN" and "-NaN") are pretty easy to produce
from SQL on most machines. This is problematic because our btree
comparison functions deem all NaNs to be equal, but our float hash
functions know nothing about NaNs and will happily produce varying
hash codes for them. That causes unexpected results from queries
that hash a column containing different NaN values. It could also
produce unexpected lookup failures when using a hash index on a
float column, i.e. "WHERE x = 'NaN'" will not find all the rows
it should.
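For example (writing the second NaN via unary minus, per the adjacent
fix):

    SELECT hashfloat8('NaN'::float8) = hashfloat8(-('NaN'::float8));
    -- false before this fix, though the two values compare as equal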
To fix, special-case NaN in the float hash functions, not too much
unlike the existing special case that forces zero and minus zero
to hash the same. I arranged for the most vanilla sort of NaN
(that coming from the C99 NAN constant) to still have the same
hash code as before, to reduce the risk to existing hash indexes.
I dithered about whether to back-patch this into stable branches,
but ultimately decided to do so. It's a clear improvement for
queries that hash internally. If there is anybody who has -NaN
in a hash index, they'd be well advised to re-index after applying
this patch ... but the misbehavior if they don't will not be much
worse than the misbehavior they had before.
Per bug #17172 from Ma Liangzhu.
Discussion: https://postgr.es/m/17172-7505bea9e04e230f@postgresql.org