The old name was overly generic. An upcoming commit moves relation stats
handling into its own file, making pgstat_initstats() look even more out of
place.
Author: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/20220303021600.hs34ghqcw6zcokdh@alap3.anarazel.de
A later commit will make the check more complicated than the
current (rel)->pgstat_info != NULL. It also just seems nicer to have a central
copy of the logic, even while still simple.
Author: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/20220303021600.hs34ghqcw6zcokdh@alap3.anarazel.de
Valgrind animal skink shows a crash in this new code. I couldn't
reproduce the problem locally, but going by blind code inspection,
initializing insert_destrel should be sufficient to fix the problem.
When an update on a partitioned table referenced in foreign key
constraints causes a row to move from one partition to another,
the fact that the move is implemented as a delete followed by an insert
on the target partition causes the foreign key triggers to have
surprising behavior. For example, a given foreign key's delete trigger
which implements the ON DELETE CASCADE clause of that key will delete
any referencing rows when triggered for that internal DELETE, although
it should not, because the referenced row is simply being moved from one
partition of the referenced root partitioned table into another, not
being deleted from it.
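For illustration, a rough sketch of the problematic scenario (hypothetical
table names):

    CREATE TABLE pk (a int PRIMARY KEY) PARTITION BY RANGE (a);
    CREATE TABLE pk1 PARTITION OF pk FOR VALUES FROM (1) TO (10);
    CREATE TABLE pk2 PARTITION OF pk FOR VALUES FROM (10) TO (20);
    CREATE TABLE fk (a int REFERENCES pk ON UPDATE CASCADE ON DELETE CASCADE);
    INSERT INTO pk VALUES (5);
    INSERT INTO fk VALUES (5);
    -- The row moves from pk1 to pk2.  Previously, the internal DELETE on
    -- pk1 queued the ON DELETE CASCADE event and the row in fk was removed;
    -- with this change an UPDATE event is fired on pk instead, so the
    -- ON UPDATE CASCADE action updates fk.a as one would expect.
    UPDATE pk SET a = 15 WHERE a = 5;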
This commit teaches trigger.c to skip queuing such delete trigger events
on the leaf partitions in favor of an UPDATE event fired on the root
target relation. Doing so is sensible because both the old and the new
tuple "logically" belong to the root relation.
The after trigger event queuing interface now allows passing the source
and the target partitions of a particular cross-partition update when
registering the update event for the root partitioned table. Along with
the two ctids of the old and the new tuple, the after trigger event now
also stores the OIDs of those partitions. The tuples fetched from the
source and the target partitions are converted into the root table
format, if necessary, before they are passed to the trigger function.
The implementation currently has a limitation that only the foreign keys
pointing into the query's target relation are considered, not those of
its sub-partitioned partitions. That seems like a reasonable
limitation, because it sounds rare to have distinct foreign keys
pointing to sub-partitioned partitions instead of to the root table.
This misbehavior stems from commit f56f8f8da6af (which added support for
foreign keys to reference partitioned tables) not paying sufficient
attention to commit 2f178441044b (which had introduced cross-partition
updates a year earlier). Even though the former commit goes back to
Postgres 12, we're not backpatching this fix at this time for fear of
destabilizing things too much, and because there are a few ABI breaks in
it that we'd have to work around in older branches. It also depends on
commit f4566345cf40, which had its own share of backpatchability issues
as well.
Author: Amit Langote <amitlangote09@gmail.com>
Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
Reviewed-by: Álvaro Herrera <alvherre@alvh.no-ip.org>
Reported-by: Eduard Català <eduard.catala@gmail.com>
Discussion: https://postgr.es/m/CA+HiwqFvkBCmfwkQX_yBqv2Wz8ugUGiBDxum8=WvVbfU1TXaNg@mail.gmail.com
Discussion: https://postgr.es/m/CAL54xNZsLwEM1XCk5yW9EqaRzsZYHuWsHQkA2L5MOSKXAwviCQ@mail.gmail.com
createdb() didn't check the validity of the collation attributes, which has
to be done explicitly on ICU < 54. It also forgot to close the ICU collator
opened during the check, which leaked some memory.
To fix both, add a new check_icu_locale() that does all the appropriate
verification and closes the ICU collator.
initdb also had a partial check for ICU < 54. To have consistent error
reporting across major ICU versions, and to get rid of the need to include
ucol.h, remove the partial check there. The backend will report an error if
needed during the post-bootstrap initialization phase.
Author: Julien Rouhaud <julien.rouhaud@free.fr>
Discussion: https://www.postgresql.org/message-id/20220319041459.qqqiqh335sga5ezj@jrouhaud
A later commit will move the handling of the different kinds of stats into
separate files. By splitting out WAL handling in this commit that later move
will just move code around without other changes.
Author: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/20220303021600.hs34ghqcw6zcokdh@alap3.anarazel.de
pgstat_report_stat() handles several types of stats, yet relation stats have
so far been handled directly in pgstat_report_stat().
A later commit will move the handling of the different kinds of stats into
separate files. By splitting out relation handling in this commit that later
move will just move code around without other changes.
Author: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/20220303021600.hs34ghqcw6zcokdh@alap3.anarazel.de
cirrus-ci now defaults to killing processes still running at the end of a
script. Unfortunately we start postgres in the background, which seems
nontrivial to fix. Previously we worked around that in 770011e3f39 by using an
older agent version, but now that CIRRUS_ESCAPING_PROCESSES exists we should
use that instead.
This reverts commit 770011e3f39f21f2095d3a044b72460c4efac345 "ci: windows:
Work around cirrus-ci bug causing test failures."
Discussion: https://postgr.es/m/CA+hUKGKx7k14n2nAALSvv6M_AB6oHasNBA65X6Dvo8hwfi9y0A@mail.gmail.com
b048326 added support for SET ACCESS METHOD in ALTER TABLE, but it missed a
few things for materialized views:
- No documentation for this clause on the ALTER MATERIALIZED VIEW page.
- psql tab completion missing.
- No regression tests.
This commit closes the gap on all the points listed above.
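For example, with illustrative object names, the clause that is now
documented, tab-completed and tested:

    CREATE MATERIALIZED VIEW mv AS SELECT 1 AS a;
    -- Rewrites the materialized view using the given table access method.
    ALTER MATERIALIZED VIEW mv SET ACCESS METHOD heap;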
Author: Yugo Nagata
Discussion: https://postgr.es/m/20220316133337.5dc9740abfa24c25ec9f67f5@sraoss.co.jp
The clauses SET TABLESPACE and ALL IN TABLESPACE have been supported in ALTER
MATERIALIZED VIEW for a long time, and they behave mostly like ALTER
TABLE by reusing the same code paths, but there were zero tests for
them. This commit closes the gap with new tests in tablespace.sql.
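For instance, with illustrative names, the kind of commands now exercised by
the new tests:

    ALTER MATERIALIZED VIEW mv SET TABLESPACE regress_tblspace;
    ALTER MATERIALIZED VIEW ALL IN TABLESPACE regress_tblspace
        SET TABLESPACE pg_default;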
Author: Yugo Nagata
Discussion: https://postgr.es/m/20220316133337.5dc9740abfa24c25ec9f67f5@sraoss.co.jp
To improve performance of check-world and improve debugging, without making
builds significantly slower (they're cached anyway).
This makes freebsd check-world run in 8.5 minutes rather than 15 minutes.
Author: Justin Pryzby <pryzbyj@telsasoft.com>
Reviewed-By: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/20220310220611.GH28503@telsasoft.com
This is useful for patches during development, but once a feature is merged,
new libraries should be added to the OS image files, rather than installed
during every CI run forever into the future.
Author: Justin Pryzby <pryzbyj@telsasoft.com>
Reviewed-By: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/20220310220611.GH28503@telsasoft.com
The output of table_to_xmlschema() and allied functions includes
a regex describing valid values for these types ... but the regex
was itself invalid, as it failed to escape a literal "+" sign.
Report and fix by Renan Soares Lopes. Back-patch to all
supported branches.
Discussion: https://postgr.es/m/7f6fabaa-3f8f-49ab-89ca-59fbfe633105@me.com
Otherwise, the database encoding varies depending on the user's
environment, and so the test might fail depending on whether ICU
likes the encoding. In particular, the test fails completely
if the prevailing locale is C.
Teach xlogreader.c to decode the WAL into a circular buffer. This will
support optimizations based on looking ahead, to follow in a later
commit.
* XLogReadRecord() works as before, decoding records one by one, and
allowing them to be examined via the traditional XLogRecGetXXX()
macros and certain traditional members like xlogreader->ReadRecPtr.
* An alternative new interface XLogReadAhead()/XLogNextRecord() is
added that returns pointers to DecodedXLogRecord objects so that it's
now possible to look ahead in the WAL stream while replaying.
* In order to be able to use the new interface effectively while
streaming data, support is added for the page_read() callback to
respond to a new nonblocking mode with XLREAD_WOULDBLOCK instead of
waiting for more data to arrive.
No direct user of the new interface is included in this commit, though
XLogReadRecord() uses it internally. Existing code doesn't need to
change, except in a few places where it was accessing reader internals
directly and now needs to go through accessor macros.
Reviewed-by: Julien Rouhaud <rjuju123@gmail.com>
Reviewed-by: Tomas Vondra <tomas.vondra@enterprisedb.com>
Reviewed-by: Andres Freund <andres@anarazel.de> (earlier versions)
Discussion: https://postgr.es/m/CA+hUKGJ4VJN8ttxScUFM8dOKX0BrBiboo5uz1cq=AovOddfHpA@mail.gmail.com
This paragraph was added to the documentation by f6f0db4, but after more
discussion we found that it just makes things more confusing, adding
cross-references between a general feature and something aimed only at
developers. The original documentation is not wrong either, and this commit
brings this part of the docs back to the same state as before f6f0db4.
Per discussion with Kyotaro Horiguchi and Thomas Munro.
Discussion: https://postgr.es/m/CA+hUKGL2uaRKu=3+bMBpejHh4k7wqzWC05aiasTsSsHGRCWa8g@mail.gmail.com
lz4frame.h was getting included after the headers specific to Postgres,
but it needs to be included between postgres_fe.h and the internal
headers.
Issue introduced by babbbb5.
Reported-by: Justin Pryzby
Discussion: https://postgr.es/m/20220317111220.GI28503@telsasoft.com
If a RowExpr is marked as returning a named composite type, we aren't
going to consult its colnames list; we'll use the attribute names
shown for the type in pg_attribute. Hence, skip storing that list,
to save a few nanoseconds when copying the expression tree around.
Discussion: https://postgr.es/m/2950001.1638729947@sss.pgh.pa.us
In commit bf7ca1587, I had the bright idea that we could make the
result of a whole-row Var (that is, foo.*) track any column aliases
that had been applied to the FROM entry the Var refers to. However,
that's not terribly logically consistent, because now the output of
the Var is no longer of the named composite type that the Var claims
to emit. bf7ca1587 tried to handle that by changing the output
tuple values to be labeled with a blessed RECORD type, but that's
really pretty disastrous: we can wind up storing such tuples onto
disk, whereupon they're not readable by other sessions.
The only practical fix I can see is to give up on what bf7ca1587
tried to do, and say that the column names of tuples produced by
a whole-row Var are always those of the underlying named composite
type, query aliases or no. While this introduces some inconsistencies,
it removes others, so it's not that awful in the abstract. What *is*
kind of awful is to make such a behavioral change in a back-patched
bug fix. But corrupt data is worse, so back-patched it will be.
(A workaround available to anyone who's unhappy about this is to
introduce an extra level of sub-SELECT, so that the whole-row Var is
referring to the sub-SELECT's output and not to a named table type.
Then the Var is of type RECORD to begin with and there's no issue.)
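For example, a sketch of that workaround (hypothetical names):

    -- t.* carries the column names of mytable's named composite type;
    -- the aliases x and y are not reflected in its output.
    SELECT row_to_json(t.*) FROM mytable AS t(x, y);
    -- With an extra level of sub-SELECT, the whole-row Var refers to the
    -- subquery's output, which is of type RECORD, so the aliases are used.
    SELECT row_to_json(t.*) FROM (SELECT * FROM mytable) AS t(x, y);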
Per report from Miles Delahunty. The faulty commit dates to 9.5,
so back-patch to all supported branches.
Discussion: https://postgr.es/m/2950001.1638729947@sss.pgh.pa.us
Restructure things so that the functions which update the global
variables shared_map and local_map are separate from the functions
which just read and write relation map files without touching any
global variables.
In the new structure of things, write_relmap_file() writes a relmap
file but no longer performs global variable updates. A symmetric
function read_relmap_file() that just reads a file without changing
any global variables is added, and load_relmap_file(), which does
change the global variables, uses it as a subroutine.
Because write_relmap_file() no longer updates shared_map and
local_map, that logic is moved to perform_relmap_update(). However,
no similar logic is added to relmap_redo() even though it also calls
write_relmap_file(). That's because recovery must not rely on the
contents of the relation map, and therefore there is no need to
initialize it. In fact, doing so seems like a mistake, because we
might then manage to rely on the in-memory map where we shouldn't.
Patch by me, based on earlier work by Dilip Kumar. Reviewed by
Ashutosh Sharma.
Discussion: http://postgr.es/m/CA+TgmobQLgrt4AXsc0ru7aFFkzv=9fS-Q_yO69=k9WY67RCctg@mail.gmail.com
When publishing changes through a partition root, we should use the row
filter for the top-most ancestor. The relation may be added to multiple
publications, using different ancestors, and 52e4f0cd47 handled this
incorrectly. With c91f71b9dc we find the correct top-most ancestor, but
the code tried to fetch the row filter from all publications, including
those using a different ancestor etc. No row filter can be found for
such publications, which was treated as replicating all rows.
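A rough sketch of the problematic setup (hypothetical names):

    CREATE TABLE t (a int) PARTITION BY LIST (a);
    CREATE TABLE t_1 PARTITION OF t FOR VALUES IN (1, 2, 3)
        PARTITION BY LIST (a);
    CREATE TABLE t_1_1 PARTITION OF t_1 FOR VALUES IN (1, 2, 3);
    -- Two publications include the same relation via different ancestors.
    CREATE PUBLICATION pub_root FOR TABLE t WHERE (a > 1)
        WITH (publish_via_partition_root = true);
    CREATE PUBLICATION pub_mid FOR TABLE t_1 WHERE (a > 2)
        WITH (publish_via_partition_root = true);
    -- A subscription to both publishes changes as t, the top-most ancestor,
    -- so only the row filter defined for t should apply; previously the
    -- missing filter for t in pub_mid made all rows replicate.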
Similarly to c91f71b9dc, this seems to be a rare issue in practice. It
requires multiple publications including the same partitioned relation,
through different ancestors.
Fixed by only passing publications containing the top-most ancestor to
pgoutput_row_filter_init(), so that treating a missing row filter as
replicating all rows is correct.
Report and fix by me, test case by Hou zj. Reviews and improvements by
Amit Kapila.
Author: Tomas Vondra, Hou zj, Amit Kapila
Reviewed-by: Amit Kapila, Hou zj
Discussion: https://postgr.es/m/d26d24dd-2fab-3c48-0162-2b7f84a9c893%40enterprisedb.com
Create subroutines ExecUpdatePrologue / ExecUpdateAct /
ExecUpdateEpilogue, and similar for ExecDelete.
Introduce a new struct to be used internally in nodeModifyTable.c,
dubbed ModifyTableContext, which contains all context information needed
to perform these operations, as well as ExecInsert and others.
This allows using a different schedule and a different way of evaluating
the results of these operations, which can be exploited by a later
commit introducing support for MERGE. It also makes ExecUpdate and
ExecDelete proper shorter and (hopefully) simpler.
Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
Reviewed-by: Amit Langote <amitlangote09@gmail.com>
Reviewed-by: Japin Li <japinli@hotmail.com>
Reviewed-by: Zhihong Yu <zyu@yugabyte.com>
Discussion: https://postgr.es/m/202202271724.4z7xv3cf46kv@alvherre.pgsql
This adds the option to use ICU as the default locale provider for
either the whole cluster or a database. New options for initdb,
createdb, and CREATE DATABASE are used to select this.
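Roughly, the new syntax looks like this (illustrative locale name; initdb and
createdb grow matching options such as --locale-provider and --icu-locale):

    -- daticulocale is set only when the ICU provider is selected;
    -- datcollate/datctype are still set (inherited from the template here).
    CREATE DATABASE db_icu
        TEMPLATE template0
        LOCALE_PROVIDER icu
        ICU_LOCALE 'en-US';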
Since some (legacy) code still uses the libc locale facilities
directly, we still need to set the libc global locale settings even if
ICU is otherwise selected. So pg_database now has three
locale-related fields: the existing datcollate and datctype, which are
always set, and a new daticulocale, which is only set if ICU is
selected. A similar change is made in pg_collation for consistency,
but in that case, only the libc-related fields or the ICU-related
field is set, never both.
Reviewed-by: Julien Rouhaud <rjuju123@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/5e756dd6-0e91-d778-96fd-b1bcb06c161a%402ndquadrant.com
Using this system function with an in-place tablespace (created when
allow_in_place_tablespaces is enabled by specifying an empty string as
location) caused a failure when using readlink(), as the tablespace is,
in this case, not a symbolic link in pg_tblspc/ but a directory.
Rather than failing, this commit changes pg_tablespace_location() so that a
path relative to the data directory is returned for in-place tablespaces,
making it possible to distinguish tablespaces created with
allow_in_place_tablespaces enabled from those created without it.
Returning a path, rather than an empty string that would match the CREATE
TABLESPACE command in this case, is more useful for tests that want to rely
on this function.
While on it, a regression test is added for this case. This is simple to add
to the main regression test suite thanks to regexp_replace(), used to mask
the OID-dependent part of the tablespace location.
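Roughly, the new test does something like this (the exact masking may
differ):

    SET allow_in_place_tablespaces = true;
    CREATE TABLESPACE regress_tblspace_inplace LOCATION '';
    SELECT regexp_replace(pg_tablespace_location(oid), '\d+$', 'OID')
        FROM pg_tablespace WHERE spcname = 'regress_tblspace_inplace';
    -- expected result: pg_tblspc/OID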
Author: Michael Paquier
Reviewed-by: Kyotaro Horiguchi, Thomas Munro
Discussion: https://postgr.es/m/YiG1RleON1WBcLnX@paquier.xyz
Commit 83fd4532a7 allowed publishing of changes via ancestors, for
publications defined with publish_via_partition_root. But the way
the ancestor was determined in get_rel_sync_entry() was incorrect,
simply updating the same variable. So with multiple publications,
replicating different ancestors, the outcome depended on the order
of publications in the list - the value from the last loop was used,
even if it wasn't the top-most ancestor.
This is probably a rare situation, as in most cases publications do
not overlap, so each partition has exactly one candidate ancestor
to replicate as and there's no ambiguity.
Fixed by tracking the "ancestor level" for each publication, and
picking the top-most ancestor. Adds a test case, verifying the
correct ancestor is used for publishing the changes and that this
does not depend on order of publications in the list.
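A minimal sketch of the ambiguity (hypothetical names):

    CREATE TABLE root (a int) PARTITION BY LIST (a);
    CREATE TABLE mid PARTITION OF root FOR VALUES IN (1, 2, 3)
        PARTITION BY LIST (a);
    CREATE TABLE leaf PARTITION OF mid FOR VALUES IN (1, 2, 3);
    CREATE PUBLICATION pub_mid FOR TABLE mid
        WITH (publish_via_partition_root = true);
    CREATE PUBLICATION pub_root FOR TABLE root
        WITH (publish_via_partition_root = true);
    -- Changes to leaf should be published as root, the top-most ancestor;
    -- previously the choice depended on the order in which the publications
    -- were processed in get_rel_sync_entry().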
Older releases have another bug in this loop - once all actions are
replicated, the loop is terminated, on the assumption that inspecting
additional publications is unnecessary. But that misses the fact that
those additional publications may replicate different ancestors.
Fixed by removal of this break condition. We might still terminate the
loop in some cases (e.g. when replicating all actions and the ancestor
is the partition root).
Backpatch to 13, where publish_via_partition_root was introduced.
Initial report and fix by me, test added by Hou zj. Reviews and
improvements by Amit Kapila.
Author: Tomas Vondra, Hou zj, Amit Kapila
Reviewed-by: Amit Kapila, Hou zj
Discussion: https://postgr.es/m/d26d24dd-2fab-3c48-0162-2b7f84a9c893%40enterprisedb.com
The mkdir command in the Installation from Source Short Version docs
didn't use the -p parameter to create intermediate directories, which would
likely cause the command to fail. At the time of writing, -p wasn't
universally available, but it can now be relied upon to exist. The -p
parameter is defined by POSIX, at least since POSIX-2004.
Reported-by: Daniel Westermann <daniel.westermann@dbi-services.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/ZR0P278MB0920263E7F2D546A33E50079D20E9@ZR0P278MB0920.CHEP278.PROD.OUTLOOK.COM
911e702077 implemented operator class parameters, including the signature
length, in ltree. Previously, the signature length for gist_ltree_ops was 8.
Because of a bug in 911e702077, the default signature length for
gist_ltree_ops became 28 for ltree 1.1 (where the options method is NOT
provided) and 8 for ltree 1.2 (where the options method is provided). This
commit changes the default signature length for ltree 1.1 back to 8.
Existing gist_ltree_ops indexes might be corrupted in various scenarios.
Thus, we have to recommend reindexing all the gist_ltree_ops indexes after
the upgrade.
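For example, with illustrative names, the signature length option and the
recommended reindex:

    -- With ltree 1.2, the signature length can be set explicitly:
    CREATE INDEX t_path_idx ON t USING gist (path gist_ltree_ops (siglen = 8));
    -- After upgrading, existing gist_ltree_ops indexes should be rebuilt:
    REINDEX INDEX t_path_idx;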
Reported-by: Victor Yegorov
Reviewed-by: Tomas Vondra, Tom Lane, Andres Freund, Nikita Glukhov
Reviewed-by: Andrew Dunstan
Author: Tomas Vondra, Alexander Korotkov
Discussion: https://postgr.es/m/17406-71e02820ae79bb40%40postgresql.org
Discussion: https://postgr.es/m/d80e0a55-6c3e-5b26-53e3-3c4f973f737c%40enterprisedb.com
Commands like ALTER TABLE SET TABLESPACE may leave files for the next
checkpoint to clean up. If such files are not removed by the time DROP
TABLESPACE is called, we request a checkpoint so that they are deleted.
However, there is presently a window before checkpoint start where new
unlink requests won't be scheduled until the following checkpoint. This
means that the checkpoint forced by DROP TABLESPACE might not remove the
files we expect it to remove, and the following ERROR will be emitted:
ERROR: tablespace "mytblspc" is not empty
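A rough sketch of how the window can be hit (hypothetical names; the failure
is timing-dependent):

    CREATE TABLESPACE mytblspc LOCATION '/some/path';
    CREATE TABLE t (a int) TABLESPACE mytblspc;
    ALTER TABLE t SET TABLESPACE pg_default;
    -- The old relation files are left for the next checkpoint to unlink.
    -- If their unlink requests are forwarded after the checkpoint forced
    -- by DROP TABLESPACE has started, the command fails as shown above.
    DROP TABLESPACE mytblspc;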
To fix, add a call to AbsorbSyncRequests() just before advancing the
unlink cycle counter. This ensures that any unlink requests forwarded
prior to checkpoint start (i.e., when ckpt_started is incremented) will
be processed by the current checkpoint. Since AbsorbSyncRequests()
performs memory allocations, it cannot be called within a critical
section, so we also need to move SyncPreCheckpoint() to before
CreateCheckPoint()'s critical section.
This is an old bug, so back-patch to all supported versions.
Author: Nathan Bossart <nathandbossart@gmail.com>
Reported-by: Nathan Bossart <nathandbossart@gmail.com>
Reviewed-by: Thomas Munro <thomas.munro@gmail.com>
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/20220215235845.GA2665318%40nathanxps13
This caused the function to fail, as the aligned copy of the raw page
given by the function caller was not saved in the correct memory
context, which needs to be multi_call_memory_ctx in this case.
Issue introduced by 076f4d9.
Per buildfarm members sifika, mylodon and longfin. I have reproduced the
problem locally on macOS.
Discussion: https://postgr.es/m/YjFPOtfCW6yLXUeM@paquier.xyz
Backpatch-through: 10
If we run out of space in the checkpointer sync request queue (which is
hopefully rare on real systems, but common with very small buffer pool),
we wait for it to drain. While waiting, we should report that as a wait
event so that users know what is going on, and also handle postmaster
death, since otherwise the loop might never terminate if the
checkpointer has exited.
Back-patch to 12. Although the problem exists in earlier releases too,
the code is structured differently before 12 so I haven't gone any
further for now, in the absence of field complaints.
Reported-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/20220226213942.nb7uvb2pamyu26dj%40alap3.anarazel.de
This commit fixes a set of issues related to the use of the SQL
functions in this module when the caller is able to pass down raw page
data as input argument:
- The page size check was fuzzy in a couple of places, sometimes checking
only a sub-range, but what we are looking for is an exact match on BLCKSZ.
After considering a few options here, I have settled on generalizing the use
of get_page_from_raw(). Most of the SQL functions already used that, and
this is not strictly required if not accessing an 8-byte-wide value from a
raw page, but this feels safer in the long run for alignment-picky
environments, particularly if a code path begins to access such values.
This also reduces the number of strings that need to be translated.
- The BRIN function brin_page_items() uses a Relation but it did not
check the access method of the opened index, potentially leading to
crashes. All the other functions in need of a Relation already did
that.
- Some code paths could fail on elog(), but we should use ereport() for
failures that can be triggered by the user.
Tests are added to stress all the cases that are fixed as of this
commit, with some junk raw pages (\set VERBOSITY ensures that this works
across all page sizes) and unexpected index types when functions open
relations.
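Roughly the sort of cases now covered (illustrative names):

    -- A raw page that is not exactly BLCKSZ bytes is rejected cleanly:
    SELECT * FROM heap_page_items('aaa'::bytea);
    -- Passing a non-BRIN index to brin_page_items() now raises an error
    -- instead of risking a crash:
    SELECT * FROM brin_page_items(get_raw_page('t_btree_idx', 0),
                                  't_btree_idx');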
Author: Michael Paquier, Justin Pryzby
Discussion: https://postgr.es/m/20220218030020.GA1137@telsasoft.com
Backpatch-through: 10
The checkpointer shouldn't ignore its latch. Other backends may be
waiting for it to drain the request queue. Hopefully real systems don't
have a full queue often, but the condition is reached easily when
shared_buffers is small.
This involves defining a new wait event, which will appear in the
pg_stat_activity view often due to spread checkpoints.
Back-patch only to 14. Even though the problem exists in earlier
branches too, it's hard to hit there. In 14 we stopped using signal
handlers for latches on Linux, *BSD and macOS, which were previously
hiding this problem by interrupting the sleep (though not reliably, as
the signal could arrive before the sleep begins; precisely the problem
latches address).
Reported-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/20220226213942.nb7uvb2pamyu26dj%40alap3.anarazel.de