We've long had a practice of making views temporary by default if they
reference any temporary tables. However, the implementation was pretty
incomplete, in that it only searched for RangeTblEntry references to
temp relations. Uses of temporary types, regclass constants, etc.
were not detected even though the dependency mechanism considers them
grounds for dropping the view. Thus a view not believed to be temp
could silently go away at session exit anyhow.
To improve matters, replace the ad-hoc isQueryUsingTempRelation()
logic with use of the dependency-based infrastructure introduced by
commit 572c40ba9. This is complete by definition, and it's less code
overall.
While we're at it, we can also extend the warning NOTICE (or ERROR
in the case of a materialized view) to mention one of the temp
objects motivating the classification of the view as temp, as was
done for functions in 572c40ba9.
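For illustration, a sketch of a case that is now detected (the exact
NOTICE wording may differ):
CREATE TEMP TABLE mytemp (a int);
CREATE VIEW v AS SELECT 'mytemp'::regclass AS r;
-- no temp relation appears in the view's FROM clause, but the regclass
-- constant makes the view depend on a temp relation, so the view is now
-- made temporary and a NOTICE mentions "mytemp" as the reason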
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Jim Jones <jim.jones@uni-muenster.de>
Discussion: https://postgr.es/m/19cf6ae1-04cd-422c-a760-d7e75fe6cba9@uni-muenster.de
Previously, we would only consider indexes marked indisvalid as usable
for INSERT ON CONFLICT. But that's problematic during CREATE INDEX
CONCURRENTLY and REINDEX CONCURRENTLY, because concurrent transactions
would end up with inconsistent lists of inferred indexes, leading to
deadlocks and spurious errors about unique key violations (because two
transactions are operating on different indexes for the speculative
insertion tokens). Change the index-inference code to return indexes
even if invalid. This fixes the spurious errors and deadlocks.
Because such indexes might not be complete, we still need uniqueness to
be verified in a different way. We do that by requiring that at least
one index marked valid is part of the set of indexes returned. It is
that index that is going to help ensure that the inserted tuple is
indeed unique.
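To illustrate the problematic scenario (a sketch with hypothetical
table and index names, not a runnable script):
-- session 1: rebuild the unique index, which creates a transient
-- invalid copy of it
REINDEX INDEX CONCURRENTLY tab_id_key;
-- sessions 2 and 3, concurrently:
INSERT INTO tab VALUES (1, 'x') ON CONFLICT (id) DO NOTHING;
-- previously the inserters could infer different arbiter index sets
-- depending on timing, causing deadlocks or spurious unique-violation
-- errors; with this change invalid indexes are returned too, and at
-- least one valid index among them verifies uniqueness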
This does not fix similar problems occurring with partitioned tables or
with named constraints. These problems will be fixed in follow-up
commits.
We have no user report of this problem, even though it exists in all
branches. Because of that and given that the fix is somewhat tricky, I
decided not to backpatch for now.
Author: Mihail Nikalayeu <mihailnikalayeu@gmail.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Álvaro Herrera <alvherre@kurilemu.de>
Discussion: https://postgr.es/m/CANtu0ogv+6wqRzPK241jik4U95s1pW3MCZ3rX5ZqbFdUysz7Qw@mail.gmail.com
index-killtuples test depends on the contrib modules btree_gin and
btree_gist, which would not be installed in the temporary installation
when the main isolation test suite is run on its own, like this:
make -C src/test/isolation/ check
src/test/isolation/ should not depend on contrib/, and EXTRA_INSTALL has
no effect in this case as this test suite uses its own Makefile rules.
This commit moves index-killtuples into a new module called "index",
a name chosen because the test depends on more than one index AM.
btree_gin and btree_gist are now pulled into the temporary installation
with EXTRA_INSTALL. The test is renamed to "killtuples", for
simplicity.
Author: Nazir Bilal Yavuz <byavuz81@gmail.com>
Suggested-by: Andres Freund <andres@anarazel.de>
Suggested-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/aKJsWedftW7UX1WM@paquier.xyz
This replaces some uses of pg_attribute_aligned() with the standard
alignas() for cases where extended alignment (larger than max_align_t)
is required.
This patch requires that all supported compilers support alignments
up to PG_IO_ALIGN_SIZE, which seems pretty likely to be true.
We can then also desupport the case where direct I/O is disabled
because pg_attribute_aligned is not supported.
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/46f05236-d4d4-4b4e-84d4-faa500f14691%40eisentraut.org
ba2a3c2302 added a way to check how a buffer is spread across
multiple OS pages, along with some NUMA information, via a new view
pg_buffercache_numa that depends on pg_buffercache_numa_pages(), a SQL
function. These can only be queried when libnuma support exists,
and generate an error otherwise.
However, it can be useful to know how shared buffers and OS pages map
when NUMA is not supported or not available. This commit expands the
capabilities around pg_buffercache_numa:
- pg_buffercache_numa_pages() is refactored as an internal function,
able to optionally process NUMA. Its SQL definition prior to this
commit is still around to ensure backward-compatibility with v1.6.
- A SQL function called pg_buffercache_os_pages() is added, able to work
with or without NUMA.
- The view pg_buffercache_numa is redefined to use
pg_buffercache_os_pages().
- A new view is added, called pg_buffercache_os_pages. This ignores
NUMA in its result processing, for better efficiency.
The implementation is done so that there is no code duplication between
the NUMA and non-NUMA views/functions, relying on one internal function
that does the job for all of them. The module is bumped to v1.7.
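For example (a simple check; the detailed columns of the new view are
omitted here):
CREATE EXTENSION pg_buffercache;
-- reports the buffer-to-OS-page mapping even when libnuma is unavailable
SELECT count(*) FROM pg_buffercache_os_pages;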
Author: Bertrand Drouvot <bertranddrouvot.pg@gmail.com>
Reviewed-by: Mircea Cadariu <cadariu.mircea@gmail.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/Z/fFA2heH6lpSLlt@ip-10-97-1-34.eu-west-3.compute.internal
We don't have an official concept of temporary functions. (You can
make one explicitly in pg_temp, but then you have to schema-qualify
it on every call.) However, until now we were quite
laissez-faire about whether a non-temporary function could depend on
a temporary object, such as a temp table or view. If one does,
it will silently go away at end of session, due to the automatic
DROP ... CASCADE on the session's temporary objects. People have
complained that that's surprising; however, we can't really forbid
it because other people (including our own regression tests) rely
on being able to do it. Let's compromise by emitting a NOTICE
at CREATE FUNCTION time. This is somewhat comparable to our
ancient practice of emitting a NOTICE when forcing a view to
become temp because it depends on temp tables.
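For example (a sketch using an SQL-standard function body; the exact
NOTICE wording may differ):
CREATE TEMP TABLE t (a int);
CREATE FUNCTION f() RETURNS int LANGUAGE sql
RETURN (SELECT max(a) FROM t);
-- a NOTICE reports that the new function depends on the temporary
-- table "t" and will be dropped at the end of the session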
Along the way, refactor recordDependencyOnExpr() so that the
dependencies of an expression can be combined with other
dependencies, instead of being emitted separately and perhaps
duplicatively.
We should probably make the implementation of temp-by-default
views use the same infrastructure used here, but that's for
another patch. It's unclear whether there are any other object
classes that deserve similar treatment.
Author: Jim Jones <jim.jones@uni-muenster.de>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/19cf6ae1-04cd-422c-a760-d7e75fe6cba9@uni-muenster.de
This request allows a support function to replace a function call
appearing in FROM (typically a set-returning function) with an
equivalent SELECT subquery. The subquery will then be subject
to the planner's usual optimizations, potentially allowing a much
better plan to be generated. While the planner has long done this
automatically for simple SQL-language functions, it's now possible
for extensions to do it for functions outside that group.
Notably, this could be useful for functions that are presently
implemented in PL/pgSQL and work by generating and then EXECUTE'ing
a SQL query.
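As a hypothetical illustration (table, column, and function names
invented here), this is the kind of function that could benefit once an
extension attaches a suitable support function via CREATE FUNCTION's
SUPPORT clause:
CREATE FUNCTION active_orders(cust int) RETURNS SETOF orders
LANGUAGE plpgsql STABLE AS $$
BEGIN
  RETURN QUERY EXECUTE
    'SELECT * FROM orders WHERE customer_id = $1 AND active'
    USING cust;
END $$;
SELECT o.* FROM active_orders(42) AS o;
-- with such a support function, the planner could replace the FROM item
-- with the underlying SELECT and optimize it with the rest of the query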
Author: Paul A Jungwirth <pj@illuminatedcomputing.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/09de6afa-c33d-4d94-a5cb-afc6cea0d2bb@illuminatedcomputing.com
The existing range_minus function raises an exception when the result is
"split", because then it can't be represented by a single range.
For example '[0,10)'::int4range - '[4,5)' would be '[0,4)' and '[5,10)'.
This commit adds new set-returning functions so that callers can get
results even in the case of splits. There is no risk of an exception for
multiranges, but a set-returning function lets us handle them the same
way we handle ranges.
Both functions return zero results if the subtraction would give an
empty range/multirange.
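For example, with the int4range case above (a sketch, assuming the
two-argument set-returning form):
SELECT * FROM range_minus_multi('[0,10)'::int4range, '[4,5)'::int4range);
-- returns two rows: [0,4) and [5,10)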
The main use-case for these functions is to implement UPDATE/DELETE FOR
PORTION OF, which must compute the application-time of "temporal
leftovers": the part of history in an updated/deleted row that was not
changed. To preserve the untouched history, we will implicitly insert
one record for each result returned by range/multirange_minus_multi.
Using a set-returning function will also let us support user-defined
types for application-time update/delete in the future.
Author: Paul A. Jungwirth <pj@illuminatedcomputing.com>
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/ec498c3d-5f2b-48ec-b989-5561c8aa2024%40illuminatedcomputing.com
There was no easy way to run specific tests in the meson-based builds.
Author: Nazir Bilal Yavuz <byavuz81@gmail.com>
Reviewed-by: Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>
Reviewed-by: Jian He <jian.universality@gmail.com>
Discussion: postgr.es/m/CAExHW5tK-QqayUN0%2BN3MF5bjV6vLKDCkRuGwoDJwc7vGjwCygQ%40mail.gmail.com
Makes the code a little simpler.
The old implementation accepted trailing whitespace, but that was
unnecessary. Firstly, its sibling function for parsing decimals,
strtodouble(), does not accept trailing whitespace. Secondly, none of
the callers can pass a string with trailing whitespace to it.
In passing, check specifically for ERANGE before printing the "out
of range" error. On some systems, strtoul() and strtod() set errno to
EINVAL on an empty or all-spaces string, and "invalid input syntax" is
more appropriate for that than "out of range". For the existing
strtodouble() function this is purely academic because it's never
called with errorOK==false, but let's be tidy. (Perhaps we should
remove the dead codepaths altogether, but I'll leave that for another
day.)
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Reviewed-by: Yuefei Shi <shiyuefei1004@gmail.com>
Reviewed-by: Neil Chen <carpenter.nail.cz@gmail.com>
Discussion: https://www.postgresql.org/message-id/861dd5bd-f2c9-4ff5-8aa0-f82bdb75ec1f@iki.fi
This changes a few union members that only existed to ensure
alignments and replaces them with the C11 alignas specifier.
This change only uses fundamental alignments (meaning approximately
alignments of basic types), which all C11 compilers must support.
There are opportunities for similar changes using extended alignments,
for example in PGIOAlignedBlock, but these are not necessarily
supported by all compilers, so they are kept as a separate change.
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/46f05236-d4d4-4b4e-84d4-faa500f14691%40eisentraut.org
This conforms more closely with the style of other struct initializers
in the code base. Initializing multiple fields on a single line is
unpopular in part because pgindent won't permit a space after the comma
before the next field's period.
Author: Melanie Plageman <melanieplageman@gmail.com>
Reported-by: Dagfinn Ilmari Mannsåker <ilmari@ilmari.org>
Discussion: https://postgr.es/m/87see87fnq.fsf%40wibble.ilmari.org
This generates a more accurate code count because 'make distclean'
doesn't always remove build files.
Author: idea from David Rowley
Discussion: https://postgr.es/m/aR4hoOotVHB7TXo5@momjian.us
Backpatch-through: master
During pruning and freezing in phase I of vacuum, we delay clearing
all_visible and all_frozen in the presence of dead items. This allows
opportunistic freezing if the page would otherwise be fully frozen,
since those dead items are later removed in vacuum phase III.
To move the VM update into the same WAL record that
prunes and freezes tuples, we must know whether the page will
be marked all-visible/all-frozen before emitting WAL. Previously we
waited until after emitting WAL to update all_visible/all_frozen to
their correct values.
The only barrier to updating these flags immediately after deciding
whether to opportunistically freeze was that while emitting WAL for a
record freezing tuples, we use the pre-corrected value of all_frozen to
compute the snapshot conflict horizon. By determining the conflict
horizon earlier, we can update the flags immediately after making the
opportunistic freeze decision.
This is required to set the VM in the XLOG_HEAP2_PRUNE_VACUUM_SCAN
record emitted by pruning and freezing.
Author: Melanie Plageman <melanieplageman@gmail.com>
Reviewed-by: Kirill Reshke <reshkekirill@gmail.com>
Discussion: https://postgr.es/m/flat/CAAKRu_ZMw6Npd_qm2KM%2BFwQ3cMOMx1Dh3VMhp8-V7SOLxdK9-g%40mail.gmail.com
When running in Docker, the container may not have the privileges needed
by get_mempolicy(). This is called by numa_available() in libnuma, but
versions prior to 2.0.19 did not expect that failure mode. The
numa_available() call seemingly succeeds, but then we get unexpected
failures when trying to query the status of pages:
postgres =# select * from pg_shmem_allocations_numa;
ERROR: XX000: failed NUMA pages inquiry status: Operation not
permitted
LOCATION: pg_get_shmem_allocations_numa, shmem.c:691
The best solution is to call get_mempolicy() first, and proceed to
numa_available() only when it does not fail with EPERM. Otherwise we'd
need to treat older libnuma versions as insufficient, which seems a bit
too harsh, as this only affects containerized systems.
Fix by me, based on suggestions by Christoph. Backpatch to 18, where the
NUMA functions were introduced.
Reported-by: Christoph Berg <myon@debian.org>
Reviewed-by: Christoph Berg <myon@debian.org>
Discussion: https://postgr.es/m/aPDZOxjrmEo_1JRG@msg.df7cb.de
Backpatch-through: 18
The upstream timezone code uses a bool variable as an array subscript.
Back when PostgreSQL's bool was char, this would have caused a warning
from gcc -Wchar-subscripts, which is included in -Wall. But this has
been obsolete probably since commit d26a810ebf, and certainly since
bool is now the C standard bool. So we can remove this deviation from
the upstream code, to make future code merges simpler.
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://www.postgresql.org/message-id/flat/9ad2749f-77ab-4ecb-a321-1ca915480b05%40eisentraut.org
Commit 792353f7d5 updated the pg_dump and pg_dumpall documentation to
clarify which statistics are not included in their output. The pg_upgrade
documentation contained a nearly identical description, but it was not updated
at the same time.
This commit updates the pg_upgrade documentation to match those changes.
Backpatch to v18, to which commit 792353f7d5 was also backpatched.
Author: Fujii Masao <masao.fujii@gmail.com>
Reviewed-by: Bruce Momjian <bruce@momjian.us>
Discussion: https://postgr.es/m/CAHGQGwFnfgdGz8aGWVzgFCFwoWQU7KnFFjmxinf4RkQAkzmR+w@mail.gmail.com
Backpatch-through: 18
MSVCRT predated C99 and invented non-standard placeholders for 64-bit
numbers, and then later used them in standard macros when C99
<inttypes.h> arrived. The macros just use %lld etc when building with
UCRT, so there should be no way for our interposed sprintf.c code to
receive the pre-standard kind these days. Time to drop the code that
parses them.
That code was in fact already dead when commit 962da900 landed, as we'd
disclaimed MSVCRT support a couple of weeks earlier in commit 1758d424,
but patch development overlapped and the history of these macros hadn't
been investigated.
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Discussion: https://postgr.es/m/4d8b1a67-aab2-4429-b44b-f03988095939%40eisentraut.org
If both sides of the operator have most-common-value statistics,
eqjoinsel wants to check which MCVs have matches on the other side.
Formerly it did this with a dumb compare-all-the-entries loop,
which had O(N^2) behavior for long MCV lists. When that code was
written, twenty-plus years ago, that seemed tolerable; but nowadays
people frequently use much larger statistics targets, so that the
O(N^2) behavior can hurt quite a bit.
To add insult to injury, when asked for semijoin semantics, the
entire comparison loop was done over, even though we frequently
know that it will yield exactly the same results.
To improve matters, switch to using a hash table to perform the
matching. Testing suggests that depending on the data type, we may
need up to about 100 MCVs on each side to amortize the extra costs
of setting up the hash table and performing hash-value computations;
so continue to use the old looping method when there are fewer MCVs
than that.
Also, refactor so that we don't repeat the matching work unless
we really need to, which occurs only in the uncommon case where
eqjoinsel_semi decides to truncate the set of inner MCVs it
considers. The refactoring also got rid of the need to use the
presented operator's commutator. Real-world operators that are
using eqjoinsel should pretty much always have commutators, but
at the very least this saves a few syscache lookups.
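The effect is easiest to see with enlarged statistics targets, for
example (hypothetical tables):
ALTER TABLE t1 ALTER COLUMN x SET STATISTICS 10000;
ALTER TABLE t2 ALTER COLUMN x SET STATISTICS 10000;
ANALYZE t1, t2;
-- planning this join formerly compared the two MCV lists pairwise
-- (up to 10000 x 10000 comparisons); it now hashes one side instead
EXPLAIN SELECT * FROM t1 JOIN t2 USING (x);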
Author: Ilia Evdokimov <ilya.evdokimov@tantorlabs.com>
Co-authored-by: David Geier <geidav.pg@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/20ea8bf5-3569-4e46-92ef-ebb2666debf6@tantorlabs.com
The fix for bug #19055 (commit b0cc0a71e) allowed CTE references in
sub-selects within aggregate functions to affect the semantic levels
assigned to such aggregates. It turns out this broke some related
cases, leading to assertion failures or strange planner errors such
as "unexpected outer reference in CTE query". After experimenting
with some alternative rules for assigning the semantic level in
such cases, we've come to the conclusion that changing the level
is more likely to break things than be helpful.
Therefore, this patch undoes what b0cc0a71e changed, and instead
installs logic to throw an error if there is any reference to a
CTE that's below the semantic level that standard SQL rules would
assign to the aggregate based on its contained Var and Aggref nodes.
(The SQL standard disallows sub-selects within aggregate functions,
so it can't reach the troublesome case and hence has no rule for
what to do.)
Perhaps someone will come along with a legitimate query that this
logic rejects, and if so probably the example will help us craft
a level-adjustment rule that works better than what b0cc0a71e did.
I'm not holding my breath for that though, because the previous
logic had been there for a very long time before bug #19055 without
complaints, and that bug report sure looks to have originated from
fuzzing not from real usage.
Like b0cc0a71e, back-patch to all supported branches, though
sadly that no longer includes v13.
Bug: #19106
Reported-by: Kamil Monicz <kamil@monicz.dev>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/19106-9dd3668a0734cd72@postgresql.org
Backpatch-through: 14