The current implementation supports exactly one IP address in a server
certificate's Common Name, which is brittle (the strings must match
exactly). This patch adds support for IPv4 and IPv6 addresses in a
server's Subject Alternative Names.
Per discussion on-list:
- If the client's expected host is an IP address, we allow fallback to
the Subject Common Name if an iPAddress SAN is not present, even if
a dNSName is present. This matches the behavior of NSS, in
violation of the relevant RFCs.
- We also, counter-intuitively, match IP addresses embedded in dNSName
SANs. From inspection this appears to have been the behavior since
the SAN matching feature was introduced in acd08d76.
- Unlike NSS, we don't map IPv4 to IPv6 addresses, or vice-versa.
Author: Jacob Champion <pchampion@vmware.com>
Co-authored-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Co-authored-by: Daniel Gustafsson <daniel@yesql.se>
Discussion: https://www.postgresql.org/message-id/flat/9f5f20974cd3a4091a788cf7f00ab663d5fcdffe.camel@vmware.com
This makes the code more consistent with SpGiST, GiST and GIN, which
already use this style, and the idea is to make it easier to introduce
more sanity checks for each of these AM-specific macros. BRIN uses a
different set of macros to get a page's type and flags, so it has no
need for something similar.
Author: Matthias van de Meent
Discussion: https://postgr.es/m/CAEze2WjE3+tGO9Fs9+iZMU+z6mMZKo54W1Zt98WKqbEUHbHOBg@mail.gmail.com
This commit includes a set of improvements to help with the debugging of
failures in these new TAP tests:
- Instead of a plain diff command to compare the dumps generated, use
File::Compare::compare for the same effect. diff is still used to
provide more context in the event of an error.
- Log the contents of regression.diffs if the pg_regress command fails.
- Unify the format of the logs generated, getting inspiration from the
style used in 027_stream_regress.pl.
wrasse is the only buildfarm member that has reported a failure so far
after the introduction of 322becb, complaining that the dumps generated
do not match, and I am lacking information to understand what is going
on in this environment.
This simplifies a lot of code in the tests of pg_upgrade without
sacrificing its coverage:
- Removal of test.sh, used for builds with make, which had accumulated
over the years tweaks for problems already solved by the centralized
TAP framework (initialization of the various PG* environment
variables, port selection).
- Removal of the code in MSVC to test pg_upgrade. This was roughly a
duplicate of test.sh adapted for Windows, with the extra footprint of
a pg_regress command and all the assumptions behind it.
Support for upgrades with older versions is changed, not removed.
test.sh was able to set up the regression database on the old instance
by launching the pg_regress command itself, creating a dependency on the
source tree of the old cluster, with tweaks to the command arguments to
adapt across the versions used. This created a backward-compatibility
dependency on older pg_regress commands, and recent changes like
d1029bb have made that much more complicated.
Instead, this commit allows tests with older major versions by
specifying a path to a SQL dump (taken with pg_dumpall from the old
cluster's installation) that will be loaded into the old instance to
upgrade, instead of running pg_regress, through an optional environment
variable called $olddump. This requires a second variable called
$oldinstall to point to the base path of the installation of the old
cluster. This method is more in line with the buildfarm client that
uses a set of static dumps to set up an old instance, so hopefully we
will be able to reuse what is introduced in this commit there. The last
step of the tests, which checks for differences between the two dumps
taken, still needs to be improved, as it can fail and require a manual
look at the dumps. This is no different from the old way of testing,
where things could fail at the last step.
Support for EXTRA_REGRESS_OPTS is kept. vcregress.pl in the MSVC
scripts still handles the test of pg_upgrade with its upgradecheck, and
bincheck is changed to skip pg_upgrade.
Author: Michael Paquier
Reviewed-by: Andrew Dunstan, Andres Freund, Rachel Heaton, Tom Lane
Discussion: https://postgr.es/m/YJ8xTmLQkotVLpN5@paquier.xyz
Add exec_assign_value, exec_eval_datum, and exec_cast_value
to the set of functions a PL/pgSQL debugger plugin can
conveniently call. This allows more convenient manipulation
of the values of PL/pgSQL function variables.
Pavel Stehule, reviewed by Aleksander Alekseev and myself
Discussion: https://postgr.es/m/CAFj8pRD+dBPU0T-KrkP7ef6QNPDEsjYCejEsBe07NDq8TybOkA@mail.gmail.com
This patch is extracted from a larger patch that allowed setting the
default returned value from these functions to json or jsonb. That had
problems, but this piece of it is fine. For these functions, only json
or jsonb can be specified in the RETURNING clause.
Extracted from an original patch from Nikita Glukhov
Reviewers have included (in no particular order) Andres Freund, Alexander
Korotkov, Pavel Stehule, Andrew Alsup, Erik Rijkers, Zhihong Yu,
Himanshu Upadhyaya, Daniel Gustafsson, Justin Pryzby.
Discussion: https://postgr.es/m/cd0bb935-0158-78a7-08b5-904886deac4b@postgrespro.ru
postgres_fdw would push ORDER BY clauses to the remote side without
verifying that the sort operator is safe to ship. Moreover, it failed
to print a suitable USING clause if the sort operator isn't default
for the sort expression's type. The net result of this is that the
remote sort might not have anywhere near the semantics we expect,
which'd be disastrous for locally-performed merge joins in particular.
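As an illustration (the foreign table and server here are hypothetical,
and the setup assumes the postgres_fdw extension is installed), a query
like the following uses ~<~, the text_pattern_ops "less than" operator,
which is not the default sort operator for text; the pushed-down ORDER
BY must either carry an explicit USING clause for it or not be shipped
at all:

    -- hypothetical foreign-server setup
    CREATE SERVER remote_srv FOREIGN DATA WRAPPER postgres_fdw
        OPTIONS (host 'remote', dbname 'app');
    CREATE FOREIGN TABLE remote_tab (name text)
        SERVER remote_srv OPTIONS (table_name 'tab');

    -- Previously the remote ORDER BY could silently fall back to text's
    -- default sort operator, returning rows in an order that breaks a
    -- locally-performed merge join.
    SELECT name FROM remote_tab ORDER BY name USING ~<~;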
We addressed similar issues in the context of ORDER BY within an
aggregate function call in commit 7012b132d, but failed to notice
that query-level ORDER BY was broken. Thus, much of the necessary
logic already existed, but it required refactoring to be usable
in both cases.
Back-patch to all supported branches. In HEAD only, remove the
core code's copy of find_em_expr_for_rel, which is no longer used
and really should never have been pushed into equivclass.c in the
first place.
Ronan Dunklau, per report from David Rowley;
reviews by David Rowley, Ranier Vilela, and myself
Discussion: https://postgr.es/m/CAApHDvr4OeC2DBVY--zVP83-K=bYrTD7F8SZDhN4g+pj2f2S-A@mail.gmail.com
When testing check-world it's hard to know which test the failure
output belongs to. The TAP test output is especially problematic, partly
due to our practice of reusing test names like 001_basic.pl.
This isn't a real issue on the buildfarm, which invokes tests separately, but
locally and for CI it's quite annoying.
To fix, the test target recipes in Makefile.global.in now run
echo "+++ (regress|isolation|tap) [install-]check in $(subdir) +++"
before running the tests.
Discussion: https://postgr.es/m/20220330165039.3zseuiraxfjkksf5@alap3.anarazel.de
Commit fb16d2c658 used the old but not quite old enough parent module,
which dates to perl version 5.10.1 as a core module. We still have a
dinosaur or two running older versions of perl, so rather than require
an upgrade in those we simply do in place what parent.pm's import()
would have done for us.
Reviewed by Tom Lane
Discussion: https://postgr.es/m/474104.1648685981@sss.pgh.pa.us
This breaks out the fetch-it-all-and-print case in SendQuery() into a
separate function. This makes the code more similar to the other cases
(\gdesc and running a query with FETCH_COUNT), and makes SendQuery()
itself a bit smaller.
Extracted from a larger patch with more changes in this area to
follow.
Author: Fabien COELHO <coelho@cri.ensmp.fr>
Discussion: https://www.postgresql.org/message-id/flat/alpine.DEB.2.21.1904132231510.8961@lancre
When creating a subscription, or altering one to add publications, raise
a warning for non-existent publications. We don't want to raise an error
here because users may create the missing publications later.
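For example (names and connection string are hypothetical):

    -- pub_sales does not yet exist on the publisher; this now raises a
    -- warning rather than requiring the publication to exist up front
    CREATE SUBSCRIPTION sub_sales
        CONNECTION 'host=pubhost dbname=src'
        PUBLICATION pub_sales;
    -- the same applies when adding publications to an existing one
    ALTER SUBSCRIPTION sub_sales ADD PUBLICATION pub_extra;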
Author: Vignesh C
Reviewed-by: Bharath Rupireddy, Japin Li, Dilip Kumar, Euler Taveira, Ashutosh Sharma, Amit Kapila
Discussion: https://postgr.es/m/CALDaNm0f4YujGW+q-Di0CbZpnQKFFrXntikaQQKuEmGG0=Zw=Q@mail.gmail.com
Compression with gzip has never been supported by the tar format of
pg_dump since this code was introduced in c3e18804, as the use of
buffered I/O in gzdopen() changes the file positioning that tar
requires. The original idea behind the use of compression with the tar
mode was to be able to include compressed data files (named %u.dat.gz)
and blob files (blob_%u.dat.gz) in the tarball generated by the dump,
with toc.dat, which tracks whether compression is used in the dump,
itself always remaining uncompressed.
Note that this commit removes the dump part of the code as well as the
restore part, removing any dependency on zlib in pg_backup_tar.c. There
could be an argument for keeping the restore part, but this would
require one to change the internals of a previously dumped tarball so
that data and blob files are compressed, with toc.dat itself changed to
track whether compression is enabled. However, the argument about
gzdopen() still holds in the read case with pg_restore.
Removing this code simplifies future additions related to compression in
pg_dump.
Author: Georgios Kokolatos, Rachel Heaton
Discussion: https://postgr.es/m/faUNEOpts9vunEaLnmxmG-DldLSg_ql137OC3JYDmgrOMHm1RvvWY2IdBkv_CRxm5spCCb_OmKNk2T03TMm0fBEWveFF9wA1WizPuAgB7Ss=@protonmail.com
When evaluating a query with a multi-column GROUP BY clause using sort,
the cost may be heavily dependent on the order in which the keys are
compared when building the groups. Grouping does not imply any ordering,
so we're allowed to compare the keys in arbitrary order, and a Hash Agg
leverages this. But for Group Agg, we simply compared keys in the order
specified in the query. This commit explores alternative orderings of
the keys, trying to find a cheaper one.
In principle, we might generate grouping paths for all permutations of
the keys, and leave the rest to the optimizer. But that might get very
expensive, so we try to pick only a couple of interesting orderings based
on both local and global information.
When planning the grouping path, we explore statistics (number of
distinct values, cost of the comparison function) for the keys and
reorder them to minimize comparison costs. Intuitively, it may be better
to perform more expensive comparisons (for complex data types etc.)
last, because maybe the cheaper comparisons will be enough. Similarly,
the higher the cardinality of a key, the lower the probability we'll
need to compare more keys. The patch generates and costs various
orderings, picking the cheapest ones.
The ordering of group keys may interact with other parts of the query,
some of which may not be known while planning the grouping. E.g. there
may be an explicit ORDER BY clause, or some other ordering-dependent
operation, higher up in the query, and using the same ordering may allow
using an incremental sort, or even eliminate the sort entirely.
The patch generates orderings and picks those minimizing the comparison
cost (for various pathkeys), and then adds orderings that might be
useful for operations higher up in the plan (ORDER BY, etc.). Finally,
it always keeps the ordering specified in the query, on the assumption
the user might have additional insights.
This introduces a new GUC enable_group_by_reordering, so that the
optimization may be disabled if needed.
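For example (table and column names are hypothetical):

    -- With enable_group_by_reordering = on (the default), the planner
    -- may compare the cheap, high-cardinality id key before the
    -- expensive text key, regardless of the order written in the query.
    SET enable_group_by_reordering = on;
    EXPLAIN SELECT count(*) FROM events GROUP BY note, id;

    -- Disable it to keep the grouping order as written:
    SET enable_group_by_reordering = off;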
The original patch was proposed by Teodor Sigaev, and later improved and
reworked by Dmitry Dolgov. Reviews by a number of people, including me,
Andrey Lepikhov, Claudio Freire, Ibrar Ahmed and Zhihong Yu.
Author: Dmitry Dolgov, Teodor Sigaev, Tomas Vondra
Reviewed-by: Tomas Vondra, Andrey Lepikhov, Claudio Freire, Ibrar Ahmed, Zhihong Yu
Discussion: https://postgr.es/m/7c79e6a5-8597-74e8-0671-1c39d124c9d6%40sigaev.ru
Discussion: https://postgr.es/m/CA%2Bq6zcW_4o2NC0zutLkOJPsFt80megSpX_dVRo6GK9PC-Jx_Ag%40mail.gmail.com
This patch introduces three SQL standard JSON functions:
JSON() (incorrectly mentioned in my commit message for f4fb45d15c)
JSON_SCALAR()
JSON_SERIALIZE()
JSON() produces json values from text, bytea, json or jsonb values, and
has facilities for handling duplicate keys.
JSON_SCALAR() produces a json value from any scalar SQL value, including
json and jsonb.
JSON_SERIALIZE() produces text or bytea from input which contains or
represents json or jsonb.
For the most part these functions don't add any significant new
capabilities, but they will be of use to users wanting
standard-compliant JSON handling.
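A few illustrative calls, sketching the syntax described above:

    SELECT JSON('{"a": 1, "a": 2}');                   -- accepted
    SELECT JSON('{"a": 1, "a": 2}' WITH UNIQUE KEYS);  -- raises an error
    SELECT JSON_SCALAR(123.45);                        -- a json number
    SELECT JSON_SCALAR('some text');                   -- a json string
    SELECT JSON_SERIALIZE(jsonb '{"a": 1}' RETURNING bytea);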
Nikita Glukhov
Reviewers have included (in no particular order) Andres Freund, Alexander
Korotkov, Pavel Stehule, Andrew Alsup, Erik Rijkers, Zhihong Yu,
Himanshu Upadhyaya, Daniel Gustafsson, Justin Pryzby.
Discussion: https://postgr.es/m/cd0bb935-0158-78a7-08b5-904886deac4b@postgrespro.ru
We do this via a subclass for any branch older than the minimum known
to be compatible with the main package (currently release 12).
This should be useful for constructing cross-version tests.
In theory this could be extended back any number of versions, with
varying degrees of compatibility.
Reviewed by Michael Paquier and Dagfinn Ilmari Mannsåker
Discussion: https://postgr.es/m/a3efd19a-d5c9-fdf2-6094-4cde056a2708@dunslane.net
libzstd allows transparent parallel compression just by setting
an option when creating the compression context, so permit that
for both client and server-side backup compression. To use this,
use something like pg_basebackup --compress WHERE-zstd:workers=N
where WHERE is "client" or "server" and N is an integer.
When compression is performed on the server side, this will spawn
threads inside the PostgreSQL backend. While there is almost no
PostgreSQL server code which is thread-safe, the threads here are used
internally by libzstd and touch only data structures controlled by
libzstd.
Patch by me, based in part on earlier work by Dipesh Pandit
and Jeevan Ladhe. Reviewed by Justin Pryzby.
Discussion: http://postgr.es/m/CA+Tgmobj6u-nWF-j=FemygUhobhryLxf9h-wJN7W-2rSsseHNA@mail.gmail.com
SSL has become the de facto term to mean an end-to-end encrypted channel
regardless of protocol used, even though the SSL protocol is deprecated.
Clarify what we mean by SSL in our documentation, especially for new
users who might be looking for TLS.
Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Discussion: https://postgr.es/m/D4ABB281-6CFD-46C6-A4E0-8EC23A2977BC@yesql.se
COPY FROM supports the HEADER option to silently discard the header
line from a CSV or text file. It is possible to mistakenly load a
file that matches the expected format, for example one in which two text
columns have been swapped, resulting in garbage in the database.
This adds a new option value HEADER MATCH that checks the column names
in the header line against the actual column names and errors out if
they do not match.
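For example (table and file are hypothetical):

    -- With HEADER MATCH, the file's first line must name exactly
    -- first_name,last_name; swapped or misspelled columns now error
    -- out instead of loading garbage.
    COPY people (first_name, last_name)
        FROM '/tmp/people.csv' WITH (FORMAT csv, HEADER MATCH);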
Author: Rémi Lapeyre <remi.lapeyre@lenstra.fr>
Reviewed-by: Daniel Verite <daniel@manitou-mail.org>
Reviewed-by: Peter Eisentraut <peter.eisentraut@enterprisedb.com>
Discussion: https://www.postgresql.org/message-id/flat/CAF1-J-0PtCWMeLtswwGV2M70U26n4g33gpe1rcKQqe6wVQDrFA@mail.gmail.com
The current logical replication behavior is to send every transaction to
the subscriber even if the transaction is empty. This can happen when the
transaction doesn't contain changes from the selected publications, or
when all the changes got filtered out. It is a waste of CPU cycles and
network bandwidth to build/transmit these empty transactions.
This patch addresses the above problem by postponing the BEGIN message
until the first change is sent. While processing a COMMIT message, if
there was no other change for that transaction, do not send the COMMIT
message. This allows us to skip sending BEGIN/COMMIT messages for empty
transactions.
When skipping empty transactions in synchronous replication mode, we send
a keepalive message to avoid delaying such transactions.
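For example (tables are hypothetical):

    -- Only t1 is published:
    CREATE PUBLICATION pub_t1 FOR TABLE t1;
    -- A transaction touching only the unpublished t2 used to reach
    -- subscribers as an empty BEGIN/COMMIT pair; it is now skipped.
    BEGIN;
    INSERT INTO t2 VALUES (1);
    COMMIT;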
Author: Ajin Cherian, Hou Zhijie, Euler Taveira
Reviewed-by: Peter Smith, Takamichi Osumi, Shi Yu, Masahiko Sawada, Greg Nancarrow, Vignesh C, Amit Kapila
Discussion: https://postgr.es/m/CAMkU=1yohp9-dv48FLoSPrMqYEyyS5ZWkaZGD41RJr10xiNo_Q@mail.gmail.com
Currently, some TAP tests that directly call the underlying function
PostgreSQL::Test::Utils::run_log() care about the return value, but
none of those that call it via PostgreSQL::Test::Cluster::run_log() care.
However, I'd like to add a test that will care, so adjust this function
to return whatever it gets back from the underlying function, just as
we do for a number of other functions in this module.
Discussion: http://postgr.es/m/CA+Tgmobj6u-nWF-j=FemygUhobhryLxf9h-wJN7W-2rSsseHNA@mail.gmail.com
This introduces the SQL/JSON functions for querying JSON data using
jsonpath expressions. The functions are:
JSON_EXISTS()
JSON_QUERY()
JSON_VALUE()
All of these functions only operate on jsonb. The workaround for now is
to cast the argument to jsonb.
JSON_EXISTS() tests if the jsonpath expression applied to the jsonb
value yields any values. JSON_VALUE() must return a single value, and an
error occurs if it tries to return multiple values. JSON_QUERY() must
return a json object or array, and there are various WRAPPER options for
handling scalar or multi-value results. Both these functions have
options for handling EMPTY and ERROR conditions.
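A few illustrative calls (the commented results follow from the
semantics described above):

    SELECT JSON_EXISTS(jsonb '{"a": [1, 2, 3]}', '$.a[2]');      -- true
    SELECT JSON_VALUE(jsonb '"123.45"', '$' RETURNING numeric);  -- 123.45
    -- JSON_QUERY must return an object or array, so scalar results
    -- need a WRAPPER:
    SELECT JSON_QUERY(jsonb '[1, [2, 3], null]', 'lax $[*]'
                      WITH CONDITIONAL WRAPPER);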
Nikita Glukhov
Reviewers have included (in no particular order) Andres Freund, Alexander
Korotkov, Pavel Stehule, Andrew Alsup, Erik Rijkers, Zhihong Yu,
Himanshu Upadhyaya, Daniel Gustafsson, Justin Pryzby.
Discussion: https://postgr.es/m/cd0bb935-0158-78a7-08b5-904886deac4b@postgrespro.ru
Linux thinks that something like "createdb foo -S bar" is perfectly
fine, but Windows wants the options to precede any bare arguments, so
we must write "createdb -S bar foo" for portability.
Per reports from CI, the buildfarm, and Andres.
Discussion: http://postgr.es/m/20220329173536.7d2ywdatsprxl4x6@alap3.anarazel.de
Because this strategy logs changes on a block-by-block basis, it
avoids the need to checkpoint before and after the operation.
However, because it logs each changed block individually, it might
generate a lot of extra write-ahead logging if the template database
is large. Therefore, the older strategy remains available via a new
STRATEGY parameter to CREATE DATABASE, and a corresponding --strategy
option to createdb.
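For example (database names are hypothetical; wal_log is the new
default):

    -- block-by-block copy, WAL-logged, avoiding the checkpoints
    -- before and after the operation
    CREATE DATABASE devdb TEMPLATE proddb STRATEGY wal_log;
    -- the older filesystem-level copy, which forces checkpoints
    CREATE DATABASE bulkdb TEMPLATE proddb STRATEGY file_copy;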
Somewhat controversially, this patch assembles the list of relations
to be copied to the new database by reading the pg_class relation of
the template database. Cross-database access like this isn't normally
possible, but it can be made to work here because there can't be any
connections to the database being copied, nor can it contain any
in-doubt transactions. Even so, we have to use lower-level interfaces
than normal, since the table scan and relcache interfaces will not
work for a database to which we're not connected. The advantage of
this approach is that we do not need to rely on the filesystem to
determine what ought to be copied, but instead on PostgreSQL's own
knowledge of the database structure. This avoids, for example,
copying stray files that happen to be located in the source database
directory.
Dilip Kumar, with a fairly large number of cosmetic changes by me.
Reviewed and tested by Ashutosh Sharma, Andres Freund, John Naylor,
Greg Nancarrow, Neha Sharma. Additional feedback from Bruce Momjian,
Heikki Linnakangas, Julien Rouhaud, Adam Brusselback, Kyotaro
Horiguchi, Tomas Vondra, Andrew Dunstan, Álvaro Herrera, and others.
Discussion: http://postgr.es/m/CA+TgmoYtcdxBjLh31DLxUXHxFVMPGzrU5_T=CYCvRyFHywSBUQ@mail.gmail.com
The original first half of the example used an employees table and an
accounts.sales_person foreign key column, while the second half (added
in commit 8f889b1083f) used a salesmen table and accounts.sales_id
for the foreign key. This makes everything use the original names.
Author: Dagfinn Ilmari Mannsåker <ilmari@ilmari.org>
Discussion: https://postgr.es/m/87o81vqjw0.fsf@wibble.ilmari.org