mirror of https://github.com/postgres/postgres.git synced 2025-07-02 09:02:37 +03:00
Commit Graph

1836 Commits

Author SHA1 Message Date
bb466c6b09 Prohibit map and grep in void context
map and grep are not intended to be used as mutators; iterating
with side effects should be done with for or foreach loops. This
fixes the one occurrence of the pattern, and bumps the perlcritic
policy to severity 5 for the map and grep policies.

Author: Dagfinn Ilmari Mannsåker <ilmari@ilmari.org>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Andrew Dunstan <andrew@dunslane.net>
Reviewed-by: Julien Rouhaud <rjuju123@gmail.com>
Discussion: https://postgr.es/m/87fsvzhhc4.fsf@wibble.ilmari.org
2021-08-31 11:07:04 +02:00
dcac5e7ac1 Refactor sharedfileset.c to separate out fileset implementation.
Move the fileset-related implementation out of sharedfileset.c to allow
its use by backends that don't want to share filesets among different
processes. After this split, the fileset infrastructure is used by both
sharedfileset.c and worker.c for the named temporary files that survive
across transactions.

Author: Dilip Kumar, based on suggestion by Andres Freund
Reviewed-by: Hou Zhijie, Masahiko Sawada, Amit Kapila
Discussion: https://postgr.es/m/E1mCC6U-0004Ik-Fs@gemulon.postgresql.org
2021-08-30 08:48:15 +05:30
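
For readers new to this area, backend-local use of the split-out
infrastructure would look roughly as follows (a sketch only; the function
names follow the fileset and BufFile APIs as I recall them and may differ
in detail):

    #include "storage/buffile.h"
    #include "storage/fileset.h"

    FileSet  fileset;
    BufFile *file;

    /* Set up a backend-local fileset; no shared memory is involved. */
    FileSetInit(&fileset);

    /* A named temporary file that can outlive the current transaction */
    file = BufFileCreateFileSet(&fileset, "subxact-changes");
    BufFileWrite(file, data, len);
    BufFileClose(file);
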
abc0910e2e Add logical change details to logical replication worker errcontext.
Previously, on the subscriber, we set the error context callback only for
tuple data conversion failures. This commit replaces the existing error
context callback with a comprehensive one that shows not only the details
of data conversion failures but also the details of the logical change
being applied by the apply worker or table sync worker. The additional
information displayed is the command, transaction id, and timestamp.

The error context is added to an error only when applying a change, not
while doing other work such as receiving data.

This will help users diagnose problems that occur during logical
replication. It can also be used for future work that allows skipping a
particular transaction on the subscriber.

Author: Masahiko Sawada
Reviewed-by: Hou Zhijie, Greg Nancarrow, Haiying Tang, Amit Kapila
Tested-by: Haiying Tang
Discussion: https://postgr.es/m/CAD21AoDeScrsHhLyEPYqN3sydg6PxAPVBboK=30xJfUVihNZDA@mail.gmail.com
2021-08-27 08:30:23 +05:30
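
For context, error context callbacks in PostgreSQL follow a stock
push/pop pattern; a minimal sketch of the kind of callback described
above (the argument struct and the message wording are illustrative, not
the committed code):

    typedef struct ApplyErrorCallbackArg
    {
        const char    *command;         /* e.g. "INSERT" */
        TransactionId  remote_xid;
    } ApplyErrorCallbackArg;

    static void
    apply_error_callback(void *arg)
    {
        ApplyErrorCallbackArg *errarg = (ApplyErrorCallbackArg *) arg;

        /* errcontext() appends a CONTEXT line to the error being thrown */
        errcontext("processing remote data for command \"%s\" in transaction %u",
                   errarg->command, errarg->remote_xid);
    }

    /* ... in the apply loop: */
    ErrorContextCallback errcallback;

    errcallback.callback = apply_error_callback;
    errcallback.arg = (void *) &errarg;
    errcallback.previous = error_context_stack;
    error_context_stack = &errcallback;             /* push */

    apply_dispatch(s);                              /* apply the change */

    error_context_stack = errcallback.previous;     /* pop */
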
2576dcfb76 Revert refactoring of hex code to src/common/
This is a combined revert of the following commits:
- c3826f8, a refactoring piece that moved the hex decoding code to
src/common/.  This code was later cleaned up by aef8948, as it originally
included no overflow checks of the kind the base64 routines in
src/common/ used by SCRAM have, making it unsafe for its purpose.
- aef8948, a more advanced refactoring of the hex encoding/decoding code
to src/common/ that added sanity checks on the result buffer for hex
decoding and encoding.  As reported by Hans Buschmann, those overflow
checks are expensive, and it is possible to see a performance drop in
the decoding/encoding of bytea or LOs the longer they are.  Simple SQL
queries working on large bytea values show a clear difference in the
perf profile.
- ccf4e27, a cleanup made possible by aef8948.

The reverts of all those commits bring the performance of hex decoding
and encoding back to what it was in ~13.  For now and post-beta3, this
is the simplest option.

Reported-by: Hans Buschmann
Discussion: https://postgr.es/m/1629039545467.80333@nidsa.net
Backpatch-through: 14
2021-08-19 09:20:13 +09:00
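
For context, the hot path in question is a trivial nibble-translation
loop; a simplified sketch of a check-free encoder in the spirit of the
restored code (not a verbatim copy):

    static const char hextbl[] = "0123456789abcdef";

    /* Write 2*len hex digits to dst; caller guarantees dst is big enough. */
    static void
    hex_encode(const char *src, size_t len, char *dst)
    {
        const char *end = src + len;

        while (src < end)
        {
            *dst++ = hextbl[(*src >> 4) & 0xF];
            *dst++ = hextbl[*src & 0xF];
            src++;
        }
    }

The reverted commits wrapped loops like this with checks on the
destination length, which is the overhead reported above for large
values.
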
ba958299ea Speed up generation of Unicode hash functions.
Sets of Unicode keys are picky about the primes used when generating
a perfect hash function for them. Callers can spend many seconds
iterating through all the possible combinations of candidate
multipliers and seeds to find one that works.

Unicode updates typically happen only once a year, but it still makes
development and testing of Unicode scripts unnecessarily slow. To fix,
iterate over the primes in the innermost loop. This does not change
any existing functions checked into the tree.
2021-08-12 09:08:56 -04:00
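
Schematically, the change moves the prime selection into the innermost
position of the search; a C sketch for illustration (the real generator
is a Perl module in the tree, and try_params and the candidate arrays
are invented names):

    static bool
    find_perfect_hash(const uint32 *keys, int nkeys)
    {
        for (int m = 0; m < NUM_MULTIPLIERS; m++)
            for (int s = 0; s < NUM_SEEDS; s++)
                for (int p = 0; p < NUM_PRIMES; p++)    /* primes innermost */
                {
                    if (try_params(keys, nkeys,
                                   multipliers[m], seeds[s], primes[p]))
                        return true;    /* collision-free mapping found */
                }
        return false;
    }

The idea is that the picky dimension (the prime) now varies fastest, so
a workable combination is reached after far fewer candidate evaluations.
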
76ad24400d Remove some special cases from MSVC build scripts
Here we add additional parsing of Makefiles to determine when to add
references to libpgport and libpgcommon.  We also remove the need for
the current contrib_extrasource additions by adding some very basic logic
to implement the Makefile rules which add .l and .y files when they exist
for a given .o file in the Makefile.

This is just some very basic additional parsing of Makefiles to try to
keep things more consistent between builds using make and MSVC builds.
This happens to work with how our current Makefiles are laid out, but it
could easily be broken in the future if someone chooses to do something in
the Makefile that we don't have parsing support for.  We will cross that
bridge when we come to it.

Author: David Rowley
Discussion: https://postgr.es/m/CAApHDvoPULi5JW3933NxgwxOmu9Ncvpcyt87UhEHAUX16QqmpA@mail.gmail.com
2021-08-09 19:45:26 +12:00
245de48455 Adjust MSVC build scripts to parse Makefiles for defines
This adjusts the MSVC build scripts to parse the compile flags mentioned
in the Makefile for -D arguments in order to determine which constants
should be defined in Visual Studio builds.

One small anomaly that appeared as a result of this change is that the
Makefile for the ltree contrib module defined LOWER_NODE, but this was
not properly defined in the MSVC build scripts.  This meant that MSVC
builds would differ in case sensitivity in the ltree module when
compared to builds using a make build environment.  To maintain the same
behavior here, we remove the -DLOWER_NODE from the Makefile and instead
define it in ltree.h for non-MSVC builds.  We need to maintain the old
behavior here as this affects the on-disk compatibility of GiST indexes
when using the ltree type.

The only other resulting change here is that REFINT_VERBOSE is now defined
for the autoinc, insert_username and moddatetime contrib modules.
Previously on MSVC, this was only defined for the refint module.  This
aligns the behavior to build environments using make as all 4 of these
modules share the same Makefile.

Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/CAApHDvpo6g5csCTjc_0C7DMvgFPomVb0Rh-AcW5afd=Ya=LRuw@mail.gmail.com
2021-07-29 12:01:23 +12:00
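
Presumably the ltree.h arrangement described above ends up looking
something like this (a guess at the shape, not the committed hunk):

    /*
     * Preserve the historical on-disk behavior of ltree GiST indexes:
     * MSVC builds never defined LOWER_NODE, so keep it undefined there.
     */
    #ifndef _MSC_VER
    #define LOWER_NODE
    #endif
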
15f16ec651 Don't duplicate references and libraries in MSVC scripts
In order not to duplicate references and libraries in the Visual Studio
project files produced by the MSVC build scripts, have them check if a
particular reference or library already exists before adding the same one
again.

Reviewed-by: Álvaro Herrera, Andrew Dunstan, Dagfinn Ilmari Mannsåker
Discussion: https://postgr.es/m/CAApHDvpo6g5csCTjc_0C7DMvgFPomVb0Rh-AcW5afd=Ya=LRuw@mail.gmail.com
2021-07-29 10:41:31 +12:00
33d74c5d00 Make the includes field an array in MSVC build scripts
Previously the 'includes' field was a string.  It's slightly nicer to
manage this when it's defined as an array instead. This allows us to
more easily detect and eliminate duplicates.

Reviewed-by: Álvaro Herrera, Andrew Dunstan, Dagfinn Ilmari Mannsåker
Discussion: https://postgr.es/m/CAApHDvpo6g5csCTjc_0C7DMvgFPomVb0Rh-AcW5afd=Ya=LRuw@mail.gmail.com
2021-07-29 10:14:25 +12:00
ed1884a2fe Use the AddFile function consistently in MSVC build scripts
We seem to be using a mix of manually adding to the 'files' hash and
calling the AddFile() method.  Let's just consistently use AddFile().

Reviewed-by: Dagfinn Ilmari Mannsåker
Discussion: https://postgr.es/m/CAApHDvpo6g5csCTjc_0C7DMvgFPomVb0Rh-AcW5afd=Ya=LRuw@mail.gmail.com
2021-07-28 23:43:40 +12:00
4b763ff642 Remove seemingly unneeded include directory in MSVC scripts
This appears to have been added way back in ee3b4188a but it's a little
unclear why the change made in that commit is even needed given that
320c7eb8c, dated 18 months earlier, added code to copy fmgroids.h to
src/include/utils.

amcheck seems to get away without adding the additional include directory,
so perhaps dblink can get away with it too.

This builds ok in my VS2017 environment, but the buildfarm may serve as a
reminder about why ee3b4188a was required.  There's only one way to find
out for sure.

Discussion: https://postgr.es/m/CAApHDvpo6g5csCTjc_0C7DMvgFPomVb0Rh-AcW5afd=Ya=LRuw@mail.gmail.com
2021-07-28 12:56:43 +12:00
e529b2dc37 Ensure HAVE_DECL_XXX macros in MSVC builds match those in Unix.
Autoconf's AC_CHECK_DECLS() always defines HAVE_DECL_whatever
as 1 or 0, but some of the entries in msvc/Solution.pm showed
such symbols as "undef" instead of 0.  Fix that for consistency.
There's no live bug in current usages AFAICS, but it's not hard
to imagine one creeping in if more-complex #if tests get added.

Back-patch to v13, which is as far back as Solution.pm contains
this data.  The inconsistency still exists in the manually-filled
pg_config_ext.h.win32 files of older branches; but as long as the
problem is only latent, it doesn't seem worth the trouble to
clean things up there.

Discussion: https://postgr.es/m/3185430.1626133592@sss.pgh.pa.us
2021-07-15 11:00:43 -04:00
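
As a reminder of the convention at stake: AC_CHECK_DECLS() symbols must
be tested with #if, never #ifdef, which is why "0" rather than "undef"
matters (strlcpy is used here purely as an example):

    /* Correct: HAVE_DECL_STRLCPY is always defined, as either 1 or 0. */
    #if !HAVE_DECL_STRLCPY
    extern size_t strlcpy(char *dst, const char *src, size_t siz);
    #endif

    /* Wrong: this branch is taken even when the symbol's value is 0. */
    #ifdef HAVE_DECL_STRLCPY
    /* ... */
    #endif
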
5865e064ab Portability fixes for sigwait.
Build farm animals running ancient HPUX and Solaris have a non-standard
sigwait() from draft versions of POSIX, so they didn't like commit
7c09d279.  To avoid the problem in general, only try to use sigwait() if
it's declared by <signal.h> and matches the expected declaration.  To
select the modern declaration on Solaris (even in non-threaded
programs), move -D_POSIX_PTHREAD_SEMANTICS into the right place to
affect all translation units.

Also fix the error checking.  Modern sigwait() doesn't set errno.

Thanks to Tom Lane for help with this.

Discussion: https://postgr.es/m/3187588.1626136248%40sss.pgh.pa.us
2021-07-15 12:34:31 +12:00
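
On the error-checking point: standard sigwait() reports failure through
its return value and leaves errno alone, so correct checking looks like
this (a stand-alone illustration, not the patched code):

    #include <signal.h>
    #include <stdio.h>
    #include <string.h>

    sigset_t    set;
    int         sig;
    int         rc;

    sigemptyset(&set);
    sigaddset(&set, SIGUSR1);

    /* Returns 0 on success or an errno value on failure. */
    rc = sigwait(&set, &sig);
    if (rc != 0)
        fprintf(stderr, "sigwait failed: %s\n", strerror(rc));
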
a8fd13cab0 Add support for prepared transactions to built-in logical replication.
To add support for streaming transactions at prepare time into the
built-in logical replication, we need to do the following things:

* Modify the output plugin (pgoutput) to implement the new two-phase API
callbacks, by leveraging the extended replication protocol.

* Modify the replication apply worker, to properly handle two-phase
transactions by replaying them on prepare.

* Add a new SUBSCRIPTION option "two_phase" to allow users to enable
two-phase transactions. We enable two_phase once the initial data sync
is over.

However, we must explicitly disable replication of two-phase transactions
during replication slot creation, even if the plugin supports it. We
don't need to replicate the changes accumulated during this phase,
and moreover, we don't have a replication connection open so we don't know
where to send the data anyway.

The streaming option is not allowed with this new two_phase option; that
can be added in a separate patch.

We don't allow toggling the two_phase option of a subscription because it
can lead to an inconsistent replica. For the same reason, we don't allow
refreshing the publication once two_phase is enabled for a subscription,
unless the copy_data option is false.

Author: Peter Smith, Ajin Cherian and Amit Kapila based on previous work by Nikhil Sontakke and Stas Kelvich
Reviewed-by: Amit Kapila, Sawada Masahiko, Vignesh C, Dilip Kumar, Takamichi Osumi, Greg Nancarrow
Tested-By: Haiying Tang
Discussion: https://postgr.es/m/02DA5F5E-CECE-4D9C-8B4B-418077E2C010@postgrespro.ru
Discussion: https://postgr.es/m/CAA4eK1+opiV4aFTmWWUF9h_32=HfPOW9vZASHarT0UA5oBrtGw@mail.gmail.com
2021-07-14 07:33:50 +05:30
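
In outline, the pgoutput side of this amounts to filling in the
prepare-related callbacks of the output plugin; a sketch (the callback
field names follow the logical decoding API as I recall it and may not
match the patch exactly):

    static void
    _PG_output_plugin_init(OutputPluginCallbacks *cb)
    {
        cb->begin_cb = pgoutput_begin_txn;
        cb->change_cb = pgoutput_change;
        cb->commit_cb = pgoutput_commit_txn;

        /* two-phase additions */
        cb->begin_prepare_cb = pgoutput_begin_prepare_txn;
        cb->prepare_cb = pgoutput_prepare_txn;
        cb->commit_prepared_cb = pgoutput_commit_prepared_txn;
        cb->rollback_prepared_cb = pgoutput_rollback_prepared_txn;
    }
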
6c9c283166 Properly install fe-auth-sasl.h
The internals of the frontend-side callbacks for SASL are visible in
libpq-int.h, but fe-auth-sasl.h itself was not getting installed.  This
caused compilation failures for applications playing with the internals
of libpq.

Issue introduced in 9fd8557.

Author: Mikhail Kulagin
Reviewed-by: Jacob Champion
Discussion: https://postgr.es/m/05ce01d777cb$40f31d60$c2d95820$@postgrespro.ru
2021-07-14 10:37:26 +09:00
83f4fcc655 Change the name of the Result Cache node to Memoize
"Result Cache" was never a great name for this node, but nobody managed
to come up with another name that anyone liked enough.  That was until
David Johnston mentioned "Node Memoization", which Tom Lane revised to
just "Memoize".  People seem to like "Memoize", so let's do the rename.

Reviewed-by: Justin Pryzby
Discussion: https://postgr.es/m/20210708165145.GG1176@momjian.us
Backpatch-through: 14, where Result Cache was introduced
2021-07-14 12:43:58 +12:00
f014b1b9bb Probe for preadv/pwritev in a more macOS-friendly way.
Apple's mechanism for dealing with functions that are available
in only some OS versions confuses AC_CHECK_FUNCS, and therefore
AC_REPLACE_FUNCS.  We can use AC_CHECK_DECLS instead, so long as
we enable -Werror=unguarded-availability-new.  This allows people
compiling for macOS to control whether or not preadv/pwritev are
used by setting MACOSX_DEPLOYMENT_TARGET, rather than supplying
a back-rev SDK.  (Of course, the latter still works, too.)

James Hilliard

Discussion: https://postgr.es/m/20210122193230.25295-1-james.hilliard1@gmail.com
2021-07-12 19:17:35 -04:00
d0a02bdb8c Update configure's probe for libldap to work with OpenLDAP 2.5.
The separate libldap_r is gone and libldap itself is now always
thread-safe.  Unfortunately there seems no easy way to tell by
inspection whether libldap is thread-safe, so we have to take
it on faith that libldap is thread-safe if there's no libldap_r.
That should be okay, as it appears that libldap_r was a standard
part of the installation going back at least 20 years.

Report and patch by Adrian Ho.  Back-patch to all supported
branches, since people might try to build any of them with
a newer OpenLDAP.

Discussion: https://postgr.es/m/17083-a19190d9591946a7@postgresql.org
2021-07-09 12:38:55 -04:00
9fd85570d1 Refactor SASL code with a generic interface for its mechanisms
The SCRAM and SASL code have been tightly linked together since SCRAM
was added to the core code, making it hard to add new SASL mechanisms,
even though these are by design different facilities, with SCRAM being
one option for SASL.  This refactors the code related to both so that
the backend and the frontend use a set of callbacks for SASL mechanisms,
documenting along the way what is expected of anybody adding a new SASL
mechanism.

The separation between the two layers is neat, using two sets of
callbacks for the frontend and the backend to mark the frontier between
the two facilities.  The shape of the callbacks is directly inspired by
the routines used by SCRAM, so the code change is straightforward, and
the SASL code is moved into its own set of files.  These will likely
change depending on how and whether new SASL mechanisms get added in the
future.

Author: Jacob Champion
Reviewed-by: Michael Paquier
Discussion: https://postgr.es/m/3d2a6f5d50e741117d6baf83eb67ebf1a8a35a11.camel@vmware.com
2021-07-07 10:55:15 +09:00
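
The frontend half of the interface amounts to a per-mechanism vtable; a
rough sketch of its shape (field names and signatures here are
illustrative rather than a copy of fe-auth-sasl.h):

    typedef struct pg_fe_sasl_mech
    {
        /* Create the per-connection state for one authentication exchange */
        void     *(*init) (PGconn *conn, const char *password,
                           const char *mech);

        /* Produce the next client message, given a server challenge */
        void      (*exchange) (void *state, char *input, int inputlen,
                               char **output, int *outputlen,
                               bool *done, bool *success);

        /* Release the per-connection state */
        void      (*free) (void *state);
    } pg_fe_sasl_mech;
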
8aafb02616 Refactor function parse_subscription_options.
Instead of using multiple parameters in the parse_subscription_options
function signature, use a struct SubOpts that encapsulates all the
subscription options and their values. It will be useful for future work
where we need to add other options to the subscription. Also, use bitmaps
to pass the supported options and retrieve the specified ones, much like
the way it is done in commit a3dc926009.

Author: Bharath Rupireddy
Reviewed-By: Peter Smith, Amit Kapila, Alvaro Herrera
Discussion: https://postgr.es/m/CALj2ACXtoQczfNsDQWobypVvHbX2DtgEHn8DawS0eGFwuo72kw@mail.gmail.com
2021-07-06 07:46:50 +05:30
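
The pattern described -- one struct for the option values plus bitmasks
recording which options are supported and which were actually specified
-- looks roughly like this (names invented for illustration):

    #define SUBOPT_CONNECT      0x01
    #define SUBOPT_ENABLED      0x02
    #define SUBOPT_SLOT_NAME    0x04

    typedef struct SubOpts
    {
        bits32      specified_opts;     /* which options the user gave */
        bool        connect;
        bool        enabled;
        char       *slot_name;
    } SubOpts;

    /* The caller states which options are valid in its context: */
    parse_subscription_options(pstate, stmt->options,
                               SUBOPT_CONNECT | SUBOPT_ENABLED,
                               &opts);
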
4035cd5d4e Add support for LZ4 with compression of full-page writes in WAL
The logic is implemented so that there can be a choice in the compression
used when building a WAL record, and an extra per-record bit is used to
track whether a block is compressed with PGLZ, LZ4 or nothing.

wal_compression, the existing parameter, is changed to an enum with
support for the following backward-compatible values:
- "off", the default, to not use compression.
- "pglz" or "on", to compress FPWs with PGLZ.
- "lz4", the new mode, to compress FPWs with LZ4.

Benchmarking has shown that LZ4 easily outclasses PGLZ.  ZSTD would also
be an interesting choice, but going with just LZ4 for now keeps the
patch minimalistic, as TOAST compression is already able to use LZ4, so
there is no need to worry about any build-related needs for this
implementation.

Author: Andrey Borodin, Justin Pryzby
Reviewed-by: Dilip Kumar, Michael Paquier
Discussion: https://postgr.es/m/3037310D-ECB7-4BF1-AF20-01C10BB33A33@yandex-team.ru
2021-06-29 11:17:55 +09:00
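
Conceptually, assembling a full-page image now dispatches on the GUC and
records the chosen method in the block image flags; a simplified sketch
(the symbol names are illustrative):

    switch (wal_compression)
    {
        case WAL_COMPRESSION_PGLZ:
            len = pglz_compress(page, BLCKSZ, dest, PGLZ_strategy_default);
            bimg_info |= BKPIMAGE_COMPRESS_PGLZ;
            break;
    #ifdef USE_LZ4
        case WAL_COMPRESSION_LZ4:
            len = LZ4_compress_default(page, dest, BLCKSZ, dest_capacity);
            bimg_info |= BKPIMAGE_COMPRESS_LZ4;
            break;
    #endif
        default:
            len = -1;           /* store the page uncompressed */
            break;
    }
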
14b2ffaf7f Doc: further updates for RELEASE_CHANGES process notes.
Mention expectations for email notifications of appropriate lists
when a branch is made or retired.

(I've been doing that informally for years, but it's better to have
it written down.)
2021-06-28 18:05:46 -04:00
bc49ab3c27 Improve pgindent release workflow.
Update RELEASE_CHANGES to direct the reader towards completing the steps
outlined in the pgindent README, both as a pre-beta task and as a task
to be performed when starting a new development cycle.

This makes it less likely that somebody will miss updating the
.git-blame-ignore-revs file when running pgindent against the tree as a
routine release change task.

Author: Peter Geoghegan <pg@bowt.ie>
Reviewed-By: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/CAH2-Wz=2PjF4As8dWECArsXxLKganYmQ-s0UeGqHHbHhqDKqeA@mail.gmail.com
2021-06-28 13:08:46 -07:00
596b5af1d3 Stamp HEAD as 15devel.
Let the hacking begin ...
2021-06-28 11:31:16 -04:00
e1c1c30f63 Pre-branch pgindent / pgperltidy run
Along the way make a slight adjustment to
src/include/utils/queryjumble.h to avoid an unused typedef.
2021-06-28 11:05:54 -04:00
d5a2c413fc Remove non-existing variable reference in MSVC's Solution.pm
The version string is grabbed from PACKAGE_VERSION in pg_config.h in the
MSVC build since 8f4fb4c6, but an error message referenced a variable
that existed before that change.  This had no consequences unless one
messed up the version number of the build badly enough.

Author: Anton Voloshin
Discussion: https://postgr.es/m/af79ee1b-9962-b299-98e1-f90a289e19e6@postgrespro.ru
Backpatch-through: 13
2021-06-26 13:52:48 +09:00
8e638845ff Add list of ignorable pgindent commits for git-blame.
Add a .git-blame-ignore-revs file with a list of pgindent, pgperltidy,
and reformat-dat-files commit hashes.  Postgres hackers that configure
git to use the ignore file will get git-blame output that avoids
attributing line changes to the ignored indent commits.  This makes
git-blame output much easier to work with in practice.

Author: Peter Geoghegan <pg@bowt.ie>
Reviewed-By: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/CAH2-Wz=cVh3GHTP6SdLU-Gnmt2zRdF8vZkcrFdSzXQ=WhbWm9Q@mail.gmail.com
2021-06-22 09:06:32 -07:00
d69fcb9cae fix syntax error 2021-05-28 09:35:11 -04:00
fb424ae85f Report configured port in MSVC built pg_config
This is a long-standing omission, discovered when trying to write code
that relied on it.

Backpatch to all live branches.
2021-05-28 09:30:16 -04:00
0251106634 Fix MSVC scripts when building with GSSAPI/Kerberos
The deliverables of upstream Kerberos on Windows are installed with
paths that do not match our MSVC scripts.  First, the include folder was
named "inc/" in our scripts, but the upstream MSIs use "include/".
Second, the build would fail with 64-bit environments as the libraries
are named differently.

This commit adjusts the MSVC scripts to be compatible with the latest
upstream installations, and I have checked that compilation works with
both the 32-bit and 64-bit installations.

Special thanks to Kondo Yuta for the help in investigating the situation
in hamerkop, which had an incorrect configuration for the GSS
compilation.

Reported-by: Brian Ye
Discussion: https://postgr.es/m/162128202219.27274.12616756784952017465@wrigleys.postgresql.org
Backpatch-through: 9.6
2021-05-27 20:11:00 +09:00
413c1ef98e Avoid creating testtablespace directories where not wanted.
Recently we refactored things so that pg_regress makes the
"testtablespace" subdirectory used by the core regression tests,
instead of doing that in the makefiles.  That had the undesirable
side effect of making such a subdirectory in every directory that
has "input" or "output" test files.  Since these subdirectories
remain empty, git doesn't complain about them, but nonetheless
they're clutter.

To fix, invent an explicit --make-testtablespace-dir switch,
so that pg_regress only makes the subdirectory when explicitly
told to.

Discussion: https://postgr.es/m/2854388.1621284789@sss.pgh.pa.us
2021-05-19 14:04:01 -04:00
def5b065ff Initial pgindent and pgperltidy run for v14.
Also "make reformat-dat-files".

The only change worthy of note is that pgindent messed up the formatting
of launcher.c's struct LogicalRepWorkerId, which led me to notice that
that struct wasn't used at all anymore, so I just took it out.
2021-05-12 13:14:10 -04:00
0b85fa93e4 Fix vcregress.pl's ancient misspelling of --max-connections.
I copied the existing spelling of "--max_connections", but
that's just wrong :-(.  Evidently setting $ENV{MAX_CONNECTIONS}
has never actually worked in this script.  Given the lack of
complaints, it's probably not worth back-patching a fix.

Per buildfarm.

Discussion: https://postgr.es/m/899209.1620759506@sss.pgh.pa.us
2021-05-11 19:17:07 -04:00
1df3555acc Get rid of the separate serial_schedule list of tests.
Having to maintain two lists of regression test scripts has proven
annoyingly error-prone.  We can achieve the effect of the
serial_schedule by running the parallel_schedule with
"--max_connections=1"; so do that and remove serial_schedule.

This causes cosmetic differences in the progress output, but it
doesn't seem worth restructuring pg_regress to avoid that.

Discussion: https://postgr.es/m/899209.1620759506@sss.pgh.pa.us
2021-05-11 17:52:13 -04:00
9ca40dcd4d Add support for LZ4 build in MSVC scripts
Since its introduction in bbe0a81, compression of table data has
supported LZ4, but nothing had been done within the MSVC scripts to allow
users to build the code with this library.

This commit closes the gap by extending the MSVC scripts to be able to
build optionally with LZ4.  Getting libraries that can be used for
compilation and execution is possible, as LZ4 can be compiled down to
MSVC 2010 using its source tarball.  MinGW may require extra effort to
work, and I have been able to test this only with MSVC; still, this is
better than nothing, giving users a way to test the feature on Windows.

Author: Dilip Kumar
Reviewed-by: Michael Paquier
Discussion: https://postgr.es/m/YJPdNeF68XpwDDki@paquier.xyz
2021-05-11 10:43:05 +09:00
c2dc19342e Revert recovery prefetching feature.
This set of commits has some bugs with known fixes, but at this late
stage in the release cycle it seems best to revert and resubmit next
time, along with some new automated test coverage for this whole area.

Commits reverted:

dc88460c: Doc: Review for "Optionally prefetch referenced data in recovery."
1d257577: Optionally prefetch referenced data in recovery.
f003d9f8: Add circular WAL decoding buffer.
323cbe7c: Remove read_page callback from XLogReader.

Remove the new GUC group WAL_RECOVERY recently added by a55a9847, as the
corresponding section of config.sgml is now reverted.

Discussion: https://postgr.es/m/CAOuzzgrn7iKnFRsB4MHp3UisEQAGgZMbk_ViTN4HV4-Ksq8zCg%40mail.gmail.com
2021-05-10 16:06:09 +12:00
8b82de0164 Remove extraneous newlines added by perl copyright patch 2021-05-07 11:37:37 -04:00
8fa6e6919c Add a copyright notice to perl files lacking one. 2021-05-07 10:56:14 -04:00
ec48314708 Revert per-index collation version tracking feature.
Design problems were discovered in the handling of composite types and
record types that would cause some relevant versions not to be recorded.
Misgivings were also expressed about the use of the pg_depend catalog
for this purpose.  We're out of time for this release so we'll revert
and try again.

Commits reverted:

1bf946bd: Doc: Document known problem with Windows collation versions.
cf002008: Remove no-longer-relevant test case.
ef387bed: Fix bogus collation-version-recording logic.
0fb0a050: Hide internal error for pg_collation_actual_version(<bad OID>).
ff942057: Suppress "warning: variable 'collcollate' set but not used".
d50e3b1f: Fix assertion in collation version lookup.
f24b1569: Rethink extraction of collation dependencies.
257836a7: Track collation versions for indexes.
cd6f479e: Add pg_depend.refobjversion.
7d1297df: Remove pg_collation.collversion.

Discussion: https://postgr.es/m/CA%2BhUKGLhj5t1fcjqAu8iD9B3ixJtsTNqyCCD4V0aTO9kAKAjjA%40mail.gmail.com
2021-05-07 21:10:11 +12:00
e8ce68b0b9 Doc: update RELEASE_CHANGES checklist.
Update checklist to reflect current practice:

* The platform-specific FAQ files are long gone.

* We've never routinely updated the libbind code we borrowed, either,
and there seems no reason to start now.

* Explain current practice of running pgindent twice per cycle.

Discussion: https://postgr.es/m/4038398.1620238684@sss.pgh.pa.us
2021-05-05 23:11:28 -04:00
3fa17d3771 Use HTAB for replication slot statistics.
Previously, we used an array of size max_replication_slots to store
stats for replication slots. But that had two problems in the cases where
a message for dropping a slot gets lost: 1) the stats for a new slot are
not recorded if the array is full, and 2) we could write beyond the end
of the array if the user reduces max_replication_slots.

This commit uses an HTAB for replication slot statistics, resolving both
problems. Now, pgstat_vacuum_stat() searches for all the dead replication
slots in the stats hashtable and tells the collector to remove them. To
avoid showing stats for already-dropped slots, the
pg_stat_replication_slots view searches slot stats by the slot name taken
from pg_replication_slots.

Also, we now send a message at slot creation to initialize the stats.
This reduces the possibility that the stats are accumulated into the old
slot's stats when a message for dropping a slot gets lost.

Reported-by: Andres Freund
Author: Sawada Masahiko, test case by Vignesh C
Reviewed-by: Amit Kapila, Vignesh C, Dilip Kumar
Discussion: https://postgr.es/m/20210319185247.ldebgpdaxsowiflw@alap3.anarazel.de
2021-04-27 09:09:11 +05:30
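
For reference, the dynahash usage underneath follows the stock pattern;
a minimal sketch keyed by slot name (the entry struct is invented for
illustration):

    typedef struct SlotStatsEntry
    {
        char            slotname[NAMEDATALEN];  /* hash key */
        PgStat_Counter  spill_txns;             /* ... and other counters */
    } SlotStatsEntry;

    HASHCTL     ctl;
    HTAB       *slot_stats;
    SlotStatsEntry *entry;
    bool        found;

    ctl.keysize = NAMEDATALEN;
    ctl.entrysize = sizeof(SlotStatsEntry);
    slot_stats = hash_create("Replication slot statistics",
                             32, &ctl, HASH_ELEM | HASH_STRINGS);

    /* Find-or-create the entry for a given slot */
    entry = (SlotStatsEntry *) hash_search(slot_stats, slotname,
                                           HASH_ENTER, &found);
    if (!found)
        entry->spill_txns = 0;
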
8ff1c94649 Allow TRUNCATE command to truncate foreign tables.
This commit introduces a new foreign data wrapper API for TRUNCATE.
It extends the TRUNCATE command so that it accepts foreign tables as
targets to truncate and invokes that API. It also extends postgres_fdw
so that it can issue TRUNCATE commands to foreign servers, by adding a
new routine for that TRUNCATE API.

The information about options specified in the TRUNCATE command, e.g.,
ONLY, CASCADE, etc., is passed to the FDW via the API. The list of
foreign tables to truncate is also passed to the FDW. The FDW truncates
the foreign data sources that the passed foreign tables specify, based
on that information. For example, postgres_fdw constructs a TRUNCATE
command using them and issues it to the foreign server.

For performance, the TRUNCATE command invokes the FDW routine for
TRUNCATE once per foreign server that the foreign tables to truncate
belong to.

Author: Kazutaka Onishi, Kohei KaiGai, slightly modified by Fujii Masao
Reviewed-by: Bharath Rupireddy, Michael Paquier, Zhihong Yu, Alvaro Herrera, Stephen Frost, Ashutosh Bapat, Amit Langote, Daniel Gustafsson, Ibrar Ahmed, Fujii Masao
Discussion: https://postgr.es/m/CAOP8fzb_gkReLput7OvOK+8NHgw-RKqNv59vem7=524krQTcWA@mail.gmail.com
Discussion: https://postgr.es/m/CAJuF6cMWDDqU-vn_knZgma+2GMaout68YUgn1uyDnexRhqqM5Q@mail.gmail.com
2021-04-08 20:56:08 +09:00
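
The new callback slots into FdwRoutine alongside the existing ones; a
sketch of what a wrapper provides (the signature is written from memory
of the PG14 API and may differ in detail):

    static void
    myfdwExecForeignTruncate(List *rels,            /* tables on this server */
                             DropBehavior behavior, /* RESTRICT or CASCADE */
                             bool restart_seqs)     /* RESTART IDENTITY? */
    {
        /* Build one remote TRUNCATE covering all rels and send it. */
    }

    /* ... in the FdwRoutine initialization: */
    routine->ExecForeignTruncate = myfdwExecForeignTruncate;
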
1d257577e0 Optionally prefetch referenced data in recovery.
Introduce a new GUC recovery_prefetch, disabled by default.  When
enabled, look ahead in the WAL and try to initiate asynchronous reading
of referenced data blocks that are not yet cached in our buffer pool.
For now, this is done with posix_fadvise(), which has several caveats.
Better mechanisms will follow in later work on the I/O subsystem.

The GUC maintenance_io_concurrency is used to limit the number of
concurrent I/Os we allow ourselves to initiate, based on pessimistic
heuristics used to infer that I/Os have begun and completed.

The GUC wal_decode_buffer_size is used to limit the maximum distance we
are prepared to read ahead in the WAL to find uncached blocks.

Reviewed-by: Alvaro Herrera <alvherre@2ndquadrant.com> (parts)
Reviewed-by: Andres Freund <andres@anarazel.de> (parts)
Reviewed-by: Tomas Vondra <tomas.vondra@2ndquadrant.com> (parts)
Tested-by: Tomas Vondra <tomas.vondra@2ndquadrant.com>
Tested-by: Jakub Wartak <Jakub.Wartak@tomtom.com>
Tested-by: Dmitry Dolgov <9erthalion6@gmail.com>
Tested-by: Sait Talha Nisanci <Sait.Nisanci@microsoft.com>
Discussion: https://postgr.es/m/CA%2BhUKGJ4VJN8ttxScUFM8dOKX0BrBiboo5uz1cq%3DAovOddfHpA%40mail.gmail.com
2021-04-08 23:20:42 +12:00
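
The posix_fadvise() mechanism referred to reduces to one hint per
uncached block discovered ahead in the WAL; in essence (a
simplification):

    #include <fcntl.h>

    /*
     * Ask the kernel to start reading the block in the background.  The
     * call returns immediately and provides no completion notification,
     * hence the pessimistic heuristics mentioned above.
     */
    (void) posix_fadvise(fd, (off_t) blkno * BLCKSZ, BLCKSZ,
                         POSIX_FADV_WILLNEED);
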
ec7ffb8096 amcheck: fix multiple problems with TOAST pointer validation
First, don't perform database access while holding a buffer lock.
When checking a heap, we can validate that TOAST pointers are sane by
performing a scan on the TOAST index and looking up the chunks that
correspond to each value ID that appears in a TOAST pointer in the main
table. But doing that while holding a buffer lock at least risks
causing other backends to wait uninterruptibly, and can probably cause
undetected and uninterruptible deadlocks.  So, instead, make a list of
checks to perform while holding the lock, and then perform the checks
after releasing it.

Second, adjust things so that we don't try to follow TOAST pointers
for tuples that are already eligible to be pruned. The TOAST tuples
become eligible for pruning at the same time that the main tuple does,
so trying to check them may lead to spurious reports of corruption,
as observed in the buildfarm. The necessary infrastructure to decide
whether or not the tuple being checked is prunable was added by
commit 3b6c1259f9, but it wasn't
actually used for its intended purpose prior to this patch.

Mark Dilger, adjusted by me to avoid a memory leak.

Discussion: http://postgr.es/m/AC5479E4-6321-473D-AC92-5EC36299FBC2@enterprisedb.com
2021-04-07 13:39:12 -04:00
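
The first fix is the classic record-now, check-later pattern; a generic
sketch (the types and helper names are invented):

    /* While the buffer lock is held: only remember what to verify. */
    ToastCheckItem *item = palloc(sizeof(ToastCheckItem));

    item->valueid = valueid;
    item->blkno = blkno;
    item->offnum = offnum;
    deferred_checks = lappend(deferred_checks, item);

    /* ... later, after LockBuffer(buf, BUFFER_LOCK_UNLOCK): */
    ListCell   *lc;

    foreach(lc, deferred_checks)
        check_toast_value((ToastCheckItem *) lfirst(lc));
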
8523492d4e Remove tupgone special case from vacuumlazy.c.
Retry the call to heap_prune_page() in rare cases where there is
disagreement between the heap_prune_page() call and the call to
HeapTupleSatisfiesVacuum() that immediately follows.  Disagreement is
possible when a concurrently-aborted transaction makes a tuple DEAD
during the tiny window between each step.  This was the only case where
a tuple considered DEAD by VACUUM still had storage following pruning.
VACUUM's definition of dead tuples is now uniformly simple and
unambiguous: dead tuples from each page are always LP_DEAD line pointers
that were encountered just after we performed pruning (and just before
we considered freezing remaining items with tuple storage).

Eliminating the tupgone=true special case enables INDEX_CLEANUP=off
style skipping of index vacuuming that takes place based on flexible,
dynamic criteria.  The INDEX_CLEANUP=off case had to know about skipping
indexes up-front before now, due to a subtle interaction with the
special case (see commit dd695979) -- this was a special case unto
itself.  Now there are no special cases.  And so now it won't matter
when or how we decide to skip index vacuuming: it won't affect how
pruning behaves, and it won't be affected by any of the implementation
details of pruning or freezing.

Also remove XLOG_HEAP2_CLEANUP_INFO records.  These are no longer
necessary because we now rely entirely on heap pruning taking care of
recovery conflicts.  There is no longer any need to generate recovery
conflicts for DEAD tuples that pruning just missed.  This also means
that heap vacuuming now uses exactly the same strategy for recovery
conflicts as index vacuuming always has: REDO routines never need to
process a latestRemovedXid from the WAL record, since earlier REDO of
the WAL record from pruning is sufficient in all cases.  The generic
XLOG_HEAP2_CLEAN record type is now split into two new record types to
reflect this new division (these are called XLOG_HEAP2_PRUNE and
XLOG_HEAP2_VACUUM).

Also stop acquiring a super-exclusive lock for heap pages when they're
vacuumed during VACUUM's second heap pass.  A regular exclusive lock is
enough.  This is correct because heap page vacuuming is now strictly a
matter of setting the LP_DEAD line pointers to LP_UNUSED.  No other
backend can have a pointer to a tuple located in a pinned buffer that
can be invalidated by a concurrent heap page vacuum operation.

Heap vacuuming can now be thought of as conceptually similar to index
vacuuming and conceptually dissimilar to heap pruning.  Heap pruning now
has sole responsibility for anything involving the logical contents of
the database (e.g., managing transaction status information, recovery
conflicts, considering what to do with HOT chains).  Index vacuuming and
heap vacuuming are now only concerned with recycling garbage items from
physical data structures that back the logical database.

Bump XLOG_PAGE_MAGIC due to pruning and heap page vacuum WAL record
changes.

Credit for the idea of retrying pruning a page to avoid the tupgone case
goes to Andres Freund.

Author: Peter Geoghegan <pg@bowt.ie>
Reviewed-By: Andres Freund <andres@anarazel.de>
Reviewed-By: Masahiko Sawada <sawada.mshk@gmail.com>
Discussion: https://postgr.es/m/CAH2-WznneCXTzuFmcwx_EyRQgfsfJAAsu+CsqRFmFXCAar=nJw@mail.gmail.com
2021-04-06 08:49:22 -07:00
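
The retry mechanism described above boils down to a loop of this shape
(heavily simplified; prune_page() and item_has_storage() stand in for
the real pruning and item-scanning code):

    retry:
        prune_page(rel, buf);   /* removes tuples already known to be DEAD */

        for (offnum = FirstOffsetNumber; offnum <= maxoff; offnum++)
        {
            if (!item_has_storage(page, offnum))
                continue;
            if (HeapTupleSatisfiesVacuum(&tuple, OldestXmin, buf) ==
                HEAPTUPLE_DEAD)
                goto retry;     /* concurrent abort hit the window; reprune */
        }
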
e6bdfd9700 Refactor HMAC implementations
Similarly to the cryptohash implementations, this refactors the existing
HMAC code into a single set of APIs that can be plugged with any of the
crypto libraries PostgreSQL is built with (only OpenSSL currently).  If
there is no such library, a fallback implementation is available.  Those
new APIs are designed similarly to the existing cryptohash layer, so
there is no real new design here, with the same logic around buffer-bound
checks and memory handling.

HMAC has a dependency on cryptohashes, so all the cryptohash types
supported by cryptohash{_openssl}.c can be used with HMAC.  This
refactoring is an advantage mainly for SCRAM, which included its own
implementation of HMAC with SHA256 without relying on the existing
crypto libraries even when PostgreSQL was built with their support.

This code has been tested on Windows and Linux, with and without
OpenSSL, across all the versions supported on HEAD from 1.1.1 down to
1.0.1.  I have also checked that the implementations are working fine
using some sample results, a custom extension of my own, and doing
cross-checks across different major versions with SCRAM with the client
and the backend.

Author: Michael Paquier
Reviewed-by: Bruce Momjian
Discussion: https://postgr.es/m/X9m0nkEJEzIPXjeZ@paquier.xyz
2021-04-03 17:30:49 +09:00
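
Usage of the resulting API presumably mirrors the cryptohash layer; a
sketch from memory of src/common/hmac.h (treat the names as
approximate):

    #include "common/hmac.h"

    uint8        result[PG_SHA256_DIGEST_LENGTH];
    pg_hmac_ctx *ctx = pg_hmac_create(PG_SHA256);

    if (ctx == NULL ||
        pg_hmac_init(ctx, key, keylen) < 0 ||
        pg_hmac_update(ctx, data, datalen) < 0 ||
        pg_hmac_final(ctx, result, sizeof(result)) < 0)
        elog(ERROR, "could not compute HMAC");

    pg_hmac_free(ctx);
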
26acb54a13 Revert "Enable parallel SELECT for "INSERT INTO ... SELECT ..."."
To allow inserts in parallel mode, this feature has to ensure that all
the constraints, triggers, etc. are parallel-safe for the partition
hierarchy, which is costly, and we need to find a better way to do that.
Additionally, we could have used existing cached information in some
cases, like indexes, domains, etc., to determine parallel-safety.

List of commits reverted, in reverse chronological order:

ed62d3737c Doc: Update description for parallel insert reloption.
c8f78b6161 Add a new GUC and a reloption to enable inserts in parallel-mode.
c5be48f092 Improve FK trigger parallel-safety check added by 05c8482f7f.
e2cda3c20a Fix use of relcache TriggerDesc field introduced by commit 05c8482f7f.
e4e87a32cc Fix valgrind issue in commit 05c8482f7f.
05c8482f7f Enable parallel SELECT for "INSERT INTO ... SELECT ...".

Discussion: https://postgr.es/m/E1lMiB9-0001c3-SY@gemulon.postgresql.org
2021-03-24 11:29:15 +05:30
bfa2cee784 Move bsearch_arg to src/port
Until now the bsearch_arg function was used only in extended statistics
code, so it was defined there.  But we already have qsort_arg in
src/port, so let's move bsearch_arg next to it.
2021-03-23 00:11:22 +01:00
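
Like qsort_arg, the point of the _arg variant is threading caller
context through the comparator without resorting to globals; a usage
sketch (compare_datums and ExtraInfo are hypothetical; the bsearch_arg
signature is as I recall it from src/port):

    static int
    cmp_datum_with_extra(const void *a, const void *b, void *arg)
    {
        return compare_datums(*(const Datum *) a, *(const Datum *) b,
                              (ExtraInfo *) arg);
    }

    /* ... */
    Datum *match = (Datum *) bsearch_arg(&key, values, nvalues,
                                         sizeof(Datum),
                                         cmp_datum_with_extra, extra);
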
2c75f8a612 Remove useless configure probe for <lz4/lz4.h>.
This seems to have been just copied-and-pasted from some other
header checks.  But our C code is entirely unprepared to support
such a header name, so it's only wasting cycles to look for it.
If we did need to support it, some #ifdefs would be required.

(A quick trawl at codesearch.debian.net finds some packages that
reference lz4/lz4.h; but they use *only* that spelling, and
appear to be intending to reference their own copy rather than
a system-level installation of liblz4.  There's no evidence of
freestanding installations that require this spelling.)

Discussion: https://postgr.es/m/457962.1616362509@sss.pgh.pa.us
2021-03-22 11:20:44 -04:00
4d399a6fbe Bring configure support for LZ4 up to snuff.
It's not okay to just shove the pkg_config results right into our
build flags, for a couple different reasons:

* This fails to maintain the separation between CPPFLAGS and CFLAGS,
as well as that between LDFLAGS and LIBS.  (The CPPFLAGS angle is,
I believe, the reason for warning messages reported when building
with MacPorts' liblz4.)

* If pkg_config emits anything other than -I/-D/-L/-l switches,
it's highly unlikely that we want to absorb those.  That'd be more
likely to break the build than do anything helpful.  (Even the -D
case is questionable; but we're doing that for libxml2, so I kept it.)

Also, it's not okay to skip doing an AC_CHECK_LIB probe, as
evidenced by recent build failure on topminnow; that should
have been caught at configure time.

Model fixes for this on configure's libxml2 support.

It appears that somebody overlooked an autoheader run, too.

Discussion: https://postgr.es/m/20210119190720.GL8560@telsasoft.com
2021-03-21 17:20:17 -04:00