It's not immediately obvious what you have to do to get "make
installcheck" to work here, so document that along the same lines
as we've used elsewhere.
It does not work on all versions of perl across all platforms.
To avoid endian-ness issues, pick a new value for column a
that has the same upper 4 bytes as lower 4 bytes. Try to
make it something that isn't likely to occur anywhere nearby
in the page.
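For illustration, a hypothetical value with that property (this is not
the value the test actually uses):

    #include <stdint.h>

    /* The upper 4 bytes equal the lower 4 bytes, so a 4-byte read sees
     * the same bytes no matter which half it lands on, on either
     * endian-ness, and the pattern is unlikely to occur elsewhere in
     * the page. */
    static const uint64_t new_value = UINT64_C(0xDEADF00DDEADF00D);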
Discussion: http://postgr.es/m/29DA079B-0658-4E66-BDAA-0EFD7B64D9C6@enterprisedb.com
Don't try to compile src/port/pthread_barrier_wait.c if we opted out of
threads at configure time. Revealed by build farm member gaur, which
can't compile this code because of problems with its pthread
implementation. It shouldn't be trying to, because it's using
--disable-thread-safety.
Defect in commit 44bf3d50.
Reported-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/2568537.1615603606%40sss.pgh.pa.us
Commit 05c8482f7f added special logic related to parallel-safety of FK
triggers. This is a bit of a hack and should have instead been done by
simply setting appropriate proparallel values on those trigger functions
themselves.
Suggested-by: Tom Lane
Author: Greg Nancarrow
Reviewed-by: Amit Kapila
Discussion: https://postgr.es/m/2309260.1615485644@sss.pgh.pa.us
Commit 24189277f6ff3169b15c7bc82926a372ca7f2dbf managed to remove
one of the two places where we were checking for a "no such user"
error while leaving the other one right next to it. So remove that
too. In fact, remove the entire test, because the whole point of
this test was to see which message we got on a failure.
It's hard to believe, but buildfarm results from the new pg_amcheck
suggest that command_checks_all() performs shell expansion on some
machines but not others, apparently due to an underlying behavior
difference in IPC::Run. Let's see if we can work around that - and
confirm that it is the real problem - by passing '-S*' as a single
argument rather than '-S' and '*' as two separate ones.
Failures were observed on jacana and hoverfly.
Mark Dilger
Discussion: http://postgr.es/m/9E76E46A-48B2-4869-BD0C-422204C1F767@enterprisedb.com
Test #12 overwrote a 1-byte varlena header to make it look like the
initial byte of a 4-byte varlena header, but the results were
endian-dependent. Also, the bytes "abc" that followed the overwritten
byte would be interpreted differently depending on endian-ness.
Overwrite 4 bytes instead, in an endian-aware manner.
Test #13 accidentally managed to depend on TOAST_MAX_CHUNK_SIZE,
which varies slightly depending on MAXIMUM_ALIGNOF. That's not
the point anyway, so make the regexp insensitive to the expected
number of chunks.
Mark Dilger
Discussion: http://postgr.es/m/A80D68F6-E38F-482D-9522-E2FB6AAFE8A1@enterprisedb.com
Simplify the _bt_vacuum_needs_cleanup() function's signature (it only needs
a single 'rel' argument now), and move it next to its sibling function
in nbtpage.c.
I believe that _bt_vacuum_needs_cleanup() was originally located in
nbtree.c due to an include dependency issue. That's no longer an issue.
Follow-up to commit 9f3665fb.
psql's editing commands decide whether the user has edited the file
by checking for change of modification timestamp. This is probably
fine for a pre-existing file, but with a temporary file that is
created within the command, it's possible for a fast typist to
save-and-exit in less than the one-second granularity of stat(2)
timestamps. On Windows FAT filesystems the granularity is even
worse, 2 seconds, making the race a bit easier to hit.
To fix, try to set the temp file's mod time to be two seconds ago.
It's unlikely this would fail, but then again the race condition
itself is unlikely, so just ignore any error.
Also, we might as well check the file size as well as its mod time.
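A minimal sketch of that approach (the function name is mine, not
psql's actual code; any failure is deliberately ignored, per the
reasoning above):

    #include <time.h>
    #include <utime.h>

    static void
    backdate_temp_file(const char *fname)
    {
        struct utimbuf ut;

        ut.modtime = time(NULL) - 2;    /* two seconds in the past */
        ut.actime = ut.modtime;
        (void) utime(fname, &ut);       /* best effort; ignore errors */
    }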
While this is a difficult bug to hit, it still seems worth
back-patching, to ensure that users' edits aren't lost.
Laurenz Albe, per gripe from Jacob Champion; based on fix suggestions
from Jacob and myself
Discussion: https://postgr.es/m/0ba3f2a658bac6546d9934ab6ba63a805d46a49b.camel@cybertec.at
GENERATED ALWAYS AS IDENTITY implies NOT NULL, but the code failed
to complain if you overrode that with "GENERATED ALWAYS AS IDENTITY
NULL". One might think the old behavior was a feature, but it was
inconsistent because the outcome varied depending on the order of
the clauses, so it seems to have been just an oversight.
Per bug #16913 from Pavel Boev. Back-patch to v10 where identity
columns were introduced.
Vik Fearing (minor tweaks by me)
Discussion: https://postgr.es/m/16913-3b5198410f67d8c6@postgresql.org
The commit added code which used a relcache TriggerDesc field across
another cache access, which it shouldn't do, because the relcache
doesn't guarantee that the field won't get moved.
Diagnosed-by: Tom Lane
Author: Greg Nancarrow
Reviewed-by: Hou Zhijie, Amit Kapila
Discussion: https://postgr.es/m/2309260.1615485644@sss.pgh.pa.us
Since commits 9f095299 and f98b8476 we don't poll the postmaster
pipe at all during crash recovery on Linux and FreeBSD, but on other
operating systems we were still doing it for every WAL record. Do it
less frequently on operating systems where system calls are required, at
the cost of delaying exit a bit after postmaster death. This avoids
expensive system calls reported to slow down CPU-bound recovery by as
much as 10-30%.
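A rough, self-contained model of the throttling idea (the names and
the polling interval here are hypothetical, not the actual code):

    #include <stdbool.h>
    #include <stdint.h>

    #define POLL_EVERY_N_RECORDS 1024           /* illustrative value */

    static bool postmaster_pipe_closed(void) { return false; } /* stand-in */

    static bool
    postmaster_death_detected(void)
    {
        static uint32_t records = 0;

        if (++records < POLL_EVERY_N_RECORDS)
            return false;                       /* skip the system call */
        records = 0;
        return postmaster_pipe_closed();        /* the expensive poll */
    }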
Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
Reviewed-by: Fujii Masao <masao.fujii@oss.nttdata.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/CA%2BhUKGK1607VmtrDUHQXrsooU%3Dap4g4R2yaoByWOOA3m8xevUQ%40mail.gmail.com
Discussion: https://postgr.es/m/7261eb39-0369-f2f4-1bb5-62f3b6083b5e@iki.fi
When shutdown is requested, the checkpointer performs a checkpoint or
restartpoint, and updates the statistics, before it exits. But previously
the checkpointer didn't send those statistics to the stats collector.
A shutdown checkpoint or restartpoint is treated as a requested one
rather than a scheduled one, so it is counted in the
pg_stat_bgwriter.checkpoints_req column.
Author: Masahiro Ikeda
Reviewed-by: Fujii Masao
Discussion: https://postgr.es/m/0509ad67b585a5b86a83d445dfa75392@oss.nttdata.com
In walwriter's main loop, a WAL stats message is sent only once enough
time (PGSTAT_STAT_INTERVAL msec) has passed since the last one was sent.
This is necessary to avoid overloading the stats collector. But it can
cause the most recent WAL stats to go unsent when walwriter exits.
To ensure that all the WAL stats are sent, this commit makes walwriter
forcibly send any remaining WAL stats to the collector when it exits
because of a shutdown request. Note that those remaining WAL stats can
still go unsent when walwriter exits with a non-zero exit code (e.g., on
a FATAL error). This is OK, because such an exit leads to a server
crash and the subsequent recovery discards all the stats. So there is
no need to send the remaining stats in that case.
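For illustration, a self-contained model of this send-with-force
pattern (names are hypothetical):

    #include <stdbool.h>
    #include <stdint.h>

    #define STATS_INTERVAL_USEC 500000  /* stands in for PGSTAT_STAT_INTERVAL */

    static int64_t last_report_usec = 0;

    static bool
    maybe_send_wal_stats(int64_t now_usec, bool force)
    {
        /* Ordinary loop iterations skip the send unless the interval has
         * elapsed; the shutdown path passes force=true so nothing is lost. */
        if (!force && now_usec - last_report_usec < STATS_INTERVAL_USEC)
            return false;               /* too soon; keep accumulating */
        /* ... transmit the accumulated counters to the collector ... */
        last_report_usec = now_usec;
        return true;
    }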
Author: Masahiro Ikeda
Reviewed-by: Fujii Masao
Discussion: https://postgr.es/m/0509ad67b585a5b86a83d445dfa75392@oss.nttdata.com
Avoid calling RelationGetNumberOfBlocks() unnecessarily in the common
case where there are no deleted but not yet recycled pages to recycle
during a cleanup-only nbtree VACUUM operation.
Follow-up to commit e5d8a999, which (among other things) taught the
"skip full scan" nbtree VACUUM mechanism to only trigger a full index
scan when the absolute number of deleted pages in the index is
considered excessive.
Commit 9f3665fb removed the vacuum_cleanup_index_scale_factor storage
parameter. However, that creates dump/reload hazards when moving across
major versions.
Add back the vacuum_cleanup_index_scale_factor parameter (though not the
GUC of the same name) purely to avoid problems when using tools like
pg_upgrade. The parameter remains disabled and undocumented.
No backpatch to Postgres 13, since vacuum_cleanup_index_scale_factor was
only disabled by REL_13_STABLE's version of master branch commit
9f3665fb in the first place -- the parameter already looks like this on
REL_13_STABLE.
Discussion: https://postgr.es/m/YEm/a3Ko3nKnBuVq@paquier.xyz
Previously, the code and documentation seem to have essentially
assumed that a call to pg_wal_replay_pause() would take effect
immediately, but that's not the case, because we only check for a
pause in certain places. This means that a tool that uses this
function and then wants to do something else afterward that is
dependent on the pause having taken effect doesn't know how long it
needs to wait to be sure that no more WAL is going to be replayed.
To avoid that, add a new function pg_get_wal_replay_pause_state()
which returns either 'not paused', 'pause requested', or 'paused'.
After calling pg_wal_replay_pause() the status will immediately change
from 'not paused' to 'pause requested'; when the startup process
has noticed this, the status will change to 'paused'. For backward
compatibility, pg_is_wal_replay_paused() still exists and returns
the same thing as before: true if a pause has been requested,
whether or not it has taken effect yet; and false if not.
The documentation is updated to clarify this.
To improve the chances that a pause request is quickly confirmed
effective, adjust things so that WaitForWALToBecomeAvailable will
swiftly reach a call to recoveryPausesHere() when a pause request
is made.
Dilip Kumar, reviewed by Simon Riggs, Kyotaro Horiguchi, Yugo Nagata,
Masahiko Sawada, and Bharath Rupireddy.
Discussion: http://postgr.es/m/CAFiTN-vcLLWEm8Zr%3DYK83rgYrT9pbC8VJCfa1kY9vL3AUPfu6g%40mail.gmail.com
Commit 92785dac2 copied some logic related to advancement of inStart
from pqParseInput3 into getRowDescriptions and getAnotherTuple,
because it wanted to allow user-defined row processor callbacks to
potentially longjmp out of the library, and inStart would have to be
updated before that happened to avoid an infinite loop. We later
decided that that API was impossibly fragile and reverted it, but
we didn't undo all of the related code changes, and this bit of
messiness survived. Undo it now so that there's just one place in
pqParseInput3's processing where inStart is advanced; this will
simplify addition of better tracing support.
getParamDescriptions had grown similar processing somewhere along
the way (not in 92785dac2; I didn't track down just when), but it's
actually buggy because its handling of corrupt-message cases seems to
have been copied from the v2 logic where we lacked a known message
length. The cases where we "goto not_enough_data" should not simply
return EOF, because then we won't consume the message, potentially
creating an infinite loop. That situation now represents a
definitively corrupt message, and we should report it as such.
Although no field reports of getParamDescriptions getting stuck in
a loop have been seen, it seems appropriate to back-patch that fix.
I chose to back-patch all of this to keep the logic looking more alike
in supported branches.
Discussion: https://postgr.es/m/2217283.1615411989@sss.pgh.pa.us
Create a wrapper object, ParallelSlotArray, to encapsulate the
number of slots and the slot array itself, plus some other relevant
bits of information. This reduces the number of parameters we have
to pass around all over the place.
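A rough sketch of what such a wrapper can look like (the field names
are illustrative, not the committed definition):

    #include <stdbool.h>
    #include <libpq-fe.h>

    typedef struct ParallelSlot
    {
        PGconn     *connection;     /* NULL until a task needs it */
        bool        inUse;          /* is a command in flight here? */
    } ParallelSlot;

    typedef struct ParallelSlotArray
    {
        int          numslots;      /* the count travels with the array */
        const char  *initcmd;       /* other relevant bits of information */
        ParallelSlot slots[];       /* C99 flexible array member */
    } ParallelSlotArray;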
Allow for a ParallelSlotArray to contain slots connected to
different databases within a single cluster. The current clients
of this mechanism don't need this, but it is expected to be used
by future patches.
Defer connecting to databases until we actually need the connection
for something. This is a slight behavior change for vacuumdb and
reindexdb. If you specify a number of jobs that is larger than the
number of objects, the extra connections will now not be used.
But, on the other hand, if you specify a number of jobs that is
so large that it's going to fail, the failure would previously have
happened before any operations were actually started, and now it
won't.
Mark Dilger, reviewed by me.
Discussion: http://postgr.es/m/12ED3DA8-25F0-4B68-937D-D907CFBF08E7@enterprisedb.com
Discussion: http://postgr.es/m/BA592F2D-F928-46FF-9516-2B827F067F57@enterprisedb.com
Based on an analysis of the OpenSSL code with Jacob, moving to EVP for
the cryptohash computations makes it necessary to set up the libcrypto
callbacks that were previously getting set only for SSL connections,
but not for connections without SSL. Not setting the callbacks makes
the use of threads potentially unsafe for connections calling
cryptohashes during authentication, like MD5 or SCRAM, if a failure
happens during a cryptohash computation. The logic setting up the
libssl and libcrypto states is therefore split into two parts, both
using the same locking: libcrypto is set up for SSL and non-SSL
connections alike, while SSL connections set up any libssl state
afterwards as needed.
Prior to this commit, only SSL connections would have set libcrypto
callbacks that are necessary to ensure a proper thread locking when
using multiple concurrent threads in libpq (ENABLE_THREAD_SAFETY). Note
that this is only required for OpenSSL 1.0.2 and 1.0.1 (oldest version
supported on HEAD), as 1.1.0 has its own internal locking and it has
dropped support for CRYPTO_set_locking_callback().
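For reference, the general shape of a 1.0.x-era libcrypto locking
callback setup looks like this (illustrative only, not libpq's actual
code):

    #include <stdlib.h>
    #include <pthread.h>
    #include <openssl/crypto.h>

    static pthread_mutex_t *crypto_locks;

    static void
    locking_cb(int mode, int n, const char *file, int line)
    {
        if (mode & CRYPTO_LOCK)
            pthread_mutex_lock(&crypto_locks[n]);
        else
            pthread_mutex_unlock(&crypto_locks[n]);
    }

    static void
    init_crypto_locks(void)
    {
        int nlocks = CRYPTO_num_locks();

        crypto_locks = malloc(nlocks * sizeof(pthread_mutex_t));
        for (int i = 0; i < nlocks; i++)
            pthread_mutex_init(&crypto_locks[i], NULL);
        CRYPTO_set_locking_callback(locking_cb);
    }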
Tests with up to 300 threads with OpenSSL 1.0.1 and 1.0.2, mixing SSL
and non-SSL connection threads did not show any performance impact after
some micro-benchmarking. pgbench can be used here with -C and a
mostly-empty script (with one \set meta-command for example) to stress
authentication requests, and we have mixed that with some custom
programs for testing.
Reported-by: Jacob Champion
Author: Michael Paquier
Reviewed-by: Jacob Champion
Discussion: https://postgr.es/m/fd3ba610085f1ff54623478cf2f7adf5af193cbb.camel@vmware.com
Add a note that per-buffer I/O condition variables currently live
outside the BufferDesc struct. Follow-up for commit d8725104.
Reported-by: Julien Rouhaud <rjuju123@gmail.com>
Discussion: https://postgr.es/m/20210311031118.hucytmrgwlktjxgq%40nol
vacuumlazy.c sometimes fails to update pg_class entries for each index
(to ensure that pg_class.reltuples is current), even though analyze.c
assumed that that must have happened during VACUUM ANALYZE. There are
at least a couple of reasons for this. For example, vacuumlazy.c could
fail to update pg_class when the index AM indicated that its statistics
are merely an estimate, per the contract for amvacuumcleanup() routines
established by commit e57345975cf back in 2006.
Stop assuming that pg_class must have been updated with accurate
statistics within VACUUM ANALYZE -- update pg_class for indexes at the
same time as the table relation in all cases. That way VACUUM ANALYZE
will never fail to keep pg_class.reltuples reasonably accurate.
The only downside of this approach (compared to the old approach) is
that it might inaccurately set pg_class.reltuples for indexes whose heap
relation ends up with the same inaccurate value anyway. This doesn't
seem too bad. We already consistently called vac_update_relstats() (to
update pg_class) for the heap/table relation twice during any VACUUM
ANALYZE -- once in vacuumlazy.c, and once in analyze.c. We now make
sure that we call vac_update_relstats() at least once (though often
twice) for each index.
This is follow up work to commit 9f3665fb, which dealt with issues in
btvacuumcleanup(). Technically this fixes an unrelated issue, though.
btvacuumcleanup() no longer provides an accurate num_index_tuples value
following commit 9f3665fb (when there was no btbulkdelete() call during
the VACUUM operation in question), but hashvacuumcleanup() has worked in
the same way for many years now.
Author: Peter Geoghegan <pg@bowt.ie>
Reviewed-By: Masahiko Sawada <sawada.mshk@gmail.com>
Discussion: https://postgr.es/m/CAH2-WzknxdComjhqo4SUxVFk_Q1171GJO2ZgHZ1Y6pion6u8rA@mail.gmail.com
Backpatch: 13-, just like commit 9f3665fb.
Remove the entire idea of "stale stats" within nbtree VACUUM (stop
caring about stats involving the number of inserted tuples). Also
remove the vacuum_cleanup_index_scale_factor GUC/param on the master
branch (though just disable them on postgres 13).
The vacuum_cleanup_index_scale_factor/stats interface made the nbtree AM
partially responsible for deciding when pg_class.reltuples stats needed
to be updated. This seems contrary to the spirit of the index AM API,
though -- it is not actually necessary for an index AM's bulk delete and
cleanup callbacks to provide accurate stats when it happens to be
inconvenient. The core code owns that. (Index AMs have the authority
to perform or not perform certain kinds of deferred cleanup based on
their own considerations, such as page deletion and recycling, but that
has little to do with pg_class.reltuples/num_index_tuples.)
This issue was fairly harmless until the introduction of the
autovacuum_vacuum_insert_threshold feature by commit b07642db, which had
an undesirable interaction with the vacuum_cleanup_index_scale_factor
mechanism: it made insert-driven autovacuums perform full index scans,
even though there is no real benefit to doing so. This has been tied to
a regression with an append-only insert benchmark [1].
Also have remaining cases that perform a full scan of an index during a
cleanup-only nbtree VACUUM indicate that the final tuple count is only
an estimate. This prevents vacuumlazy.c from setting the index's
pg_class.reltuples in those cases (it will now only update pg_class when
vacuumlazy.c had TIDs for nbtree to bulk delete). This arguably fixes
an oversight in deduplication-related bugfix commit 48e12913.
[1] https://smalldatum.blogspot.com/2021/01/insert-benchmark-postgres-is-still.html
Author: Peter Geoghegan <pg@bowt.ie>
Reviewed-By: Masahiko Sawada <sawada.mshk@gmail.com>
Discussion: https://postgr.es/m/CAD21AoA4WHthN5uU6+WScZ7+J_RcEjmcuH94qcoUPuB42ShXzg@mail.gmail.com
Backpatch: 13-, where autovacuum_vacuum_insert_threshold was added.
1. Backends waiting for buffer I/O are now interruptible.
2. If something goes wrong in a backend that is currently performing
I/O, waiting backends no longer wake up until that backend reaches
AbortBufferIO() and broadcasts on the CV. Previously, any waiters would
wake up (because the I/O lock was automatically released) and then
busy-loop until AbortBufferIO() cleared BM_IO_IN_PROGRESS.
3. LWLockMinimallyPadded is removed, as it would now be unused.
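A self-contained model of the new waiting scheme in points 1 and 2,
using plain POSIX threads rather than PostgreSQL's own primitives:

    #include <pthread.h>
    #include <stdbool.h>

    typedef struct
    {
        pthread_mutex_t lock;
        pthread_cond_t  cv;
        bool            io_in_progress;
    } BufferIOState;

    static void
    wait_for_io(BufferIOState *s)
    {
        pthread_mutex_lock(&s->lock);
        while (s->io_in_progress)       /* no wakeup until a real change */
            pthread_cond_wait(&s->cv, &s->lock);
        pthread_mutex_unlock(&s->lock);
    }

    static void
    finish_io(BufferIOState *s)         /* what AbortBufferIO also does */
    {
        pthread_mutex_lock(&s->lock);
        s->io_in_progress = false;
        pthread_cond_broadcast(&s->cv); /* wake all waiters exactly once */
        pthread_mutex_unlock(&s->lock);
    }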
Author: Robert Haas <robertmhaas@gmail.com>
Reviewed-by: Thomas Munro <thomas.munro@gmail.com>
Reviewed-by: Julien Rouhaud <rjuju123@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us> (earlier version, 2016)
Discussion: https://postgr.es/m/CA%2BhUKGJ8nBFrjLuCTuqKN0pd2PQOwj9b_jnsiGFFMDvUxahj_A%40mail.gmail.com
Discussion: https://postgr.es/m/CA+Tgmoaj2aPti0yho7FeEf2qt-JgQPRWb0gci_o1Hfr=C56Xng@mail.gmail.com
When a foreign key constraint is applied to a partitioned table, each
leaf partition inherits a similar FK constraint. We were processing all
of those constraints independently, meaning that in large partitioning
trees we'd build up large collections of cached FK-checking query plans.
However, in all cases but one, the generated queries are actually
identical for all members of the inheritance tree (because, in most
cases, the query only mentions the topmost table of the other side of
the FK relationship). So we can share a single cached plan among all
the partitions, saving memory, not to mention time to build and maintain
the cached plans.
Keisuke Kuroda and Amit Langote
Discussion: https://postgr.es/m/cab4b85d-9292-967d-adf2-be0d803c3e23@nttcom.co.jp_1
We italicized some, but not all, instances of "per se", "pro forma", and
"ad hoc". These phrases are widespread in formal registers of English,
so it's debatable whether they even qualify as foreign. We could instead
try to be more consistent in the use of <foreignphrase>, but that's
difficult to enforce, so let's just remove the tags for those words.
The one case that seems to deserve the tag is "voilà". Instead of keeping
just one instance of the tag, change that to a more standard phrase.
John Naylor
Discussion: https://postgr.es/m/CAFBsxsHtWs_NsccAVgQ=tTUKkXHpHdkjZXtp_Cd9dGWyBDxfbQ@mail.gmail.com
Clarify the discussion in "User-Defined Procedures", by laying out
the key differences between functions and procedures in a bulleted
list. Notably, this avoids burying the lede about procedures being
able to do transaction control. Make the back-link in the CREATE
FUNCTION reference page more prominent, and add one in CREATE
PROCEDURE.
Per gripe from Guyren Howe. Thanks to David Johnston for discussion.
Discussion: https://postgr.es/m/BYAPR03MB4903C53A8BB7EFF5EA289674A6949@BYAPR03MB4903.namprd03.prod.outlook.com
In the current lazy vacuum implementation, some index AMs such as
btree indexes call lazy_tid_reaped() for each index tuple during
ambulkdelete to check if the index tuple points to the (collected)
garbage tuple. In that function, we simply call bsearch(), but we
should be able to know the result without bsearch() if the index tuple
points to a heap tuple that is out of the range of the collected garbage
tuples. Therefore, add a simple bound check before resorting to
bsearch(). Testing has shown that this can give significant
performance benefits.
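A self-contained sketch of the bound check, modeling the sorted TID
array as plain 64-bit keys (hypothetical names, not the committed
code):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdlib.h>

    static int
    cmp_u64(const void *a, const void *b)
    {
        uint64_t x = *(const uint64_t *) a;
        uint64_t y = *(const uint64_t *) b;

        return (x > y) - (x < y);
    }

    static bool
    key_is_dead(uint64_t key, const uint64_t *dead, size_t ndead)
    {
        if (ndead == 0)
            return false;
        /* Cheap bound check first: most live keys fall outside the
         * range covered by the collected garbage. */
        if (key < dead[0] || key > dead[ndead - 1])
            return false;
        return bsearch(&key, dead, ndead, sizeof(uint64_t),
                       cmp_u64) != NULL;
    }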
Author: Masahiko Sawada <masahiko.sawada@2ndquadrant.com>
Discussion: https://www.postgresql.org/message-id/flat/CA+fd4k76j8jKzJzcx8UqEugvayaMSnQz0iLUt_XgBp-_-bd22A@mail.gmail.com
Commit 547f04e7 produced errors on AIX/xlc while building plpython. The
new code appears to be incompatible with the hack installed by commit
a11cf433. Without access to an AIX system to check, my guess is that
_POSIX_C_SOURCE may be required for <time.h> to declare the things the
header needs to see, but plpython.h undefines it.
For now, to unbreak build farm animal hoverfly, just move the new
pg_time_usec_t support into pgbench.c. Perhaps later we could figure
out what to rearrange to put it back into a header for wider use.
Discussion: https://postgr.es/m/CA%2BhUKG%2BP%2BjcD%3Dx9%2BagyTdWtjpOT64MYiGic%2Bcbu_TD8CV%3D6A3w%40mail.gmail.com
1. pg_time_usec_t needs to be printed with INT64_FORMAT, not %ld, or
32-bit systems complain, per lapwing.
2. Some Windows compilers didn't like a thread function not marked with
__stdcall, per whelk; let's see if this fixes the problem.
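For fix 1, the standard-C equivalent of the problem and its repair
(PostgreSQL spells the macro INT64_FORMAT; <inttypes.h> spells it
PRId64):

    #include <inttypes.h>
    #include <stdio.h>

    int
    main(void)
    {
        int64_t elapsed_us = 123456789012345;   /* overflows a 32-bit long */

        /* "%ld" is only correct where long is 64 bits wide. */
        printf("elapsed = %" PRId64 " us\n", elapsed_us);
        return 0;
    }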
Moving this logic into pg_regress fixes a potential failure with
parallel tests when pg_upgrade and the main regression test suite both
trigger the makefile rule that cleans up testtablespace/ under
src/test/regress. Even though pg_upgrade was triggering this rule, it
has no need to do so, as it uses a different tablespace path. So if
pg_upgrade triggered the makefile rule for the tablespace setup while
the main regression test suite was running the tablespace cases, the
suite would fail.
61be85a was a similar attempt at achieving that, but it broke cases
where the regression tests need to run under an Administrator
account, as with Appveyor.
Reported-by: Andres Freund, Kyotaro Horiguchi
Reviewed-by: Peter Eisentraut
Discussion: https://postgr.es/m/20201209012911.uk4d6nxcnkp7ehrx@alap3.anarazel.de
Wait until all pgbench threads are connected before benchmarking begins.
This fixes a problem where some connections could take a very long time
to be established because of lock contention from earlier connections,
making results unstable and bogus with high connection counts.
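A minimal model of the scheme (the helper functions are stand-ins for
pgbench's per-thread work):

    #include <pthread.h>

    static void connect_all_clients(void *arg) { (void) arg; }
    static void run_benchmark(void *arg) { (void) arg; }

    static pthread_barrier_t start_barrier; /* init'd with the thread count */

    static void *
    client_thread(void *arg)
    {
        connect_all_clients(arg);             /* may be slow under contention */
        pthread_barrier_wait(&start_barrier); /* wait for all other threads */
        run_benchmark(arg);                   /* the clock starts after this */
        return NULL;
    }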
Author: Andres Freund <andres@anarazel.de>
Author: Fabien COELHO <coelho@cri.ensmp.fr>
Reviewed-by: Marina Polyakova <m.polyakova@postgrespro.ru>
Reviewed-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Reviewed-by: Hayato Kuroda <kuroda.hayato@fujitsu.com>
Reviewed-by: David Rowley <dgrowleyml@gmail.com>
Discussion: https://postgr.es/m/20200227180100.zyvjwzcpiokfsqm2%40alap3.anarazel.de
Instead of instr_time (struct timespec) and the INSTR_XXX macros,
introduce pg_time_usec_t and use integer arithmetic. Don't include the
connection time in TPS unless using -C mode, but report it separately.
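The core of the new representation is just this kind of helper (a
sketch only; pgbench's version differs in detail):

    #include <stdint.h>
    #include <sys/time.h>

    typedef int64_t pg_time_usec_t;     /* microseconds since the epoch */

    static pg_time_usec_t
    now_usec(void)
    {
        struct timeval tv;

        gettimeofday(&tv, NULL);
        return (pg_time_usec_t) tv.tv_sec * 1000000 + tv.tv_usec;
    }

    /* Durations become plain integer subtraction: end - start. */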
Author: Fabien COELHO <coelho@cri.ensmp.fr>
Reviewed-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Reviewed-by: Hayato Kuroda <kuroda.hayato@fujitsu.com>
Discussion: https://postgr.es/m/20200227180100.zyvjwzcpiokfsqm2%40alap3.anarazel.de