We're currently maintaining two outputs for the cube regression test.
But that appears to be unsuitable, because these outputs differ in only
a few checks involving scientific notation. So, split the checks
involving scientific notation into a separate test, making contrib/cube
easier to maintain. Back-patch to all supported versions in order to
make further back-patching easier.
Discussion: https://postgr.es/m/CAPpHfdvJgWjxHsJTtT%2Bo1tz3OR8EFHcLQjhp-d3%2BUcmJLh-fQA%40mail.gmail.com
Author: Alexander Korotkov
Backpatch-through: 9.3
In general, Postgres requires -fno-strict-aliasing with compilers that
implement C99 strict aliasing rules. There's little hope of getting
rid of that overall. But it seems like it would be a good idea if
storage/checksum_impl.h in particular didn't depend on it, because
that header is explicitly intended to be included by external programs.
We don't have a lot of control over the compiler switches that an
external program might use, as shown by Michael Banck's report of
failure in a privately-modified version of pg_verify_checksums.
Hence, switch to using a union in place of willy-nilly pointer casting
inside this file. I think this makes the code a bit more readable
anyway.
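As a rough sketch of the union technique (names simplified here, not
the exact header code), the page is viewed both as a header and as an
array of 32-bit words through one union, so no pointer of an unrelated
type is ever formed:

    /* Hedged sketch only; the real header's layout differs in detail. */
    typedef union
    {
        PageHeaderData phdr;    /* page viewed as a header */
        uint32         data[BLCKSZ / sizeof(uint32)];   /* ... as words */
    } ChecksummablePage;

    /* Both views legally read the same bytes:
     *    page->phdr.pd_checksum    and    page->data[i]
     */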
checksum_impl.h hasn't changed since it was introduced in 9.3,
so back-patch all the way.
Discussion: https://postgr.es/m/1535618100.1286.3.camel@credativ.de
Commit f49842d, which added support for partitionwise joins, built the
child's tlist by applying adjust_appendrel_attrs() to the parent's. So in
the case where the parent's included a whole-row Var for the parent, the
child's contained a ConvertRowtypeExpr. To cope with that, that commit
added code to the planner, such as setrefs.c, but some code paths still
assumed that the tlist for a scan (or join) rel would only include Vars
and PlaceHolderVars, which was true before that commit, causing errors:
* When creating an explicit sort node for an input path for a mergejoin
path for a child join, prepare_sort_from_pathkeys() threw the 'could not
find pathkey item to sort' error.
* When deparsing a relation participating in a pushed down child join as a
subquery in contrib/postgres_fdw, get_relation_column_alias_ids() threw
the 'unexpected expression in subquery output' error.
* When performing set_plan_references() on a local join plan generated by
contrib/postgres_fdw for EvalPlanQual support for a pushed down child
join, fix_join_expr() threw the 'variable not found in subplan target
lists' error.
To fix these, two approaches have been proposed: one by Ashutosh Bapat and
one by me. While the former keeps building the child's tlist with a
ConvertRowtypeExpr, the latter builds it with a whole-row Var for the
child so as not to violate the planner assumption, and tries to fix it
up later. But both approaches need more work, so instead refuse to
generate partitionwise join paths when whole-row Vars are involved. We
don't need to handle ConvertRowtypeExprs in the child's tlists for now,
so this commit also removes the changes to the planner.
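A hedged sketch of the shape of that refusal (function name and
placement are illustrative, not the committed code): scan a relation's
target expressions and bail out of partitionwise join planning when a
whole-row Var (varattno == 0) is present:

    /* Illustrative only: detect a whole-row Var in a rel's tlist. */
    static bool
    tlist_contains_whole_row_var(RelOptInfo *rel)
    {
        ListCell   *lc;

        foreach(lc, rel->reltarget->exprs)
        {
            Var        *var = (Var *) lfirst(lc);

            if (IsA(var, Var) && var->varattno == 0)
                return true;    /* refuse partitionwise join */
        }
        return false;
    }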
Previously, partitionwise join computed attr_needed data for each child
separately, and built the child join's tlist using that data, which also
required an extra step for adding PlaceHolderVars to that tlist, but it
would be more efficient to build it from the parent join's tlist through
the adjust_appendrel_attrs() transformation. So this commit builds that
list that way, and simplifies build_joinrel_tlist() and placeholder.c as
well as part of set_append_rel_size() to basically what they were before
partitionwise join went in.
Back-patch to PG11 where partitionwise join was introduced.
Report by Rajkumar Raghuwanshi. Analysis by Ashutosh Bapat, who also
provided some of the regression tests. Patch by me, reviewed by Robert Haas.
Discussion: https://postgr.es/m/CAKcux6ktu-8tefLWtQuuZBYFaZA83vUzuRd7c1YHC-yEWyYFpg@mail.gmail.com
To verify the checksums, we opened the file in text mode, which doesn't
work on Windows: WIN32 treats Control-Z as EOF in files opened in text
mode, leading to a "short read of block .." error in some cases.
Fix by opening the files in binary mode.
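The shape of the fix, roughly (PG_BINARY is the existing portability
macro that maps to O_BINARY on Windows and to 0 elsewhere; the exact
call site may differ):

    /* Before: text mode, so Windows stops reading at a Control-Z byte. */
    f = open(fn, O_RDONLY, 0);

    /* After: binary mode, so page data is read byte-for-byte. */
    f = open(fn, O_RDONLY | PG_BINARY, 0);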
Author: Amit Kapila
Reviewed-by: Magnus Hagander
Backpatch-through: 11
Discussion: https://postgr.es/m/CAA4eK1+LOnzod+h85FGmyjWzXKy-XV1FYwEyP-Tky2WpD5cxwA@mail.gmail.com
Instead of repeating almost the same large query in each version branch,
use one query and add a few columns to the SELECT list depending on the
version. This saves a lot of duplication.
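The pattern looks roughly like this (columns shown are illustrative,
not the actual query):

    /* One base query; version-dependent columns are appended to it. */
    appendPQExpBufferStr(query, "SELECT c.oid, c.relname");
    if (fout->remoteVersion >= 100000)
        appendPQExpBufferStr(query, ", c.relispartition");
    appendPQExpBufferStr(query, " FROM pg_class c");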
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Using -d is odd, because we normally reserve that for a database
argument, so rename it to -v and add a long version, --verbose.
Also, reduce it to emit one line per file checked rather than one line
per block.
Per a complaint from Michael Banck.
Author: Yugo Nagata <nagata@sraoss.co.jp>
Reviewed-by: Michael Banck <michael.banck@credativ.de>
Discussion: https://postgr.es/m/20180827113411.GA22768@nighthawk.caipicrew.dd-dns.de
Add support for error position reporting for the expressions contained
in defaults and check constraint definitions. This currently works only
for CREATE TABLE, not ALTER TABLE, because the latter is not set up to
pass around the original query string.
Reviewed-by: Fabien COELHO <coelho@cri.ensmp.fr>
When the postmaster enters its PM_STARTUP phase, it immediately starts
background workers that use BgWorkerStart_PostmasterStart mode. This
caused problems for a fast shutdown, as the postmaster forgot to send
SIGTERM to the already-started background workers. With smart and
immediate shutdowns this happened correctly; fast shutdown was the only
mode missing the signal.
Author: Alexander Kukushkin
Reviewed-by: Michael Paquier
Discussion: https://postgr.es/m/CAFh8B=mvnD8+DZUfzpi50DoaDfZRDfd7S=gwj5vU9GYn8UvHkA@mail.gmail.com
Backpatch-through: 9.5
This function had a blacklist of dump object types that it believed
needed exclusive lock ... but we hadn't maintained that, so that it
was missing ROW SECURITY, POLICY, and INDEX ATTACH items, all of
which need (or should be treated as needing) exclusive lock.
Since the same oversight seems likely in future, let's reverse the
sense of the test so that the code has a whitelist of safe object
types; better to wrongly assume a command can't be run in parallel
than the opposite. Currently the only POST_DATA object type that's
safe is CREATE INDEX ... and that list hasn't changed in a long time.
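A minimal sketch of the reversed test (simplified; the real function
also sets up the lock dependencies):

    /* Whitelist: only object types known safe are exempted.  Anything
     * not listed here is conservatively assumed to lock its table. */
    if (strcmp(te->desc, "INDEX") == 0)
        return;             /* CREATE INDEX is parallel-safe */
    /* ... otherwise record an exclusive-lock dependency ... */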
Back-patch to 9.5 where RLS came in.
Discussion: https://postgr.es/m/11450.1535483506@sss.pgh.pa.us
Ensure the TOC entry is marked with the correct schema, so that its
name is as unique as the index's is.
Fix the dependencies: we want dependencies from this TOC entry to the
two indexes it depends on, and we don't care (at least not for this
purpose) what order the indexes are created in. Also, add dependencies
on the indexes' underlying tables. Those might seem pointless given
the index dependencies, but they are helpful to cue parallel restore
to avoid running the ATTACH PARTITION in parallel with other DDL on
the same tables.
Discussion: https://postgr.es/m/10817.1535494963@sss.pgh.pa.us
Now that we have TAP tests, a contrib module may have something useful
to do in "make check" even if it has no pg_regress-style regression
scripts, and hence no REGRESS setting. But the TAP tests will fail,
or else test the wrong installed files, unless we install the contrib
module into the temp installation. So move the bit about adding to
EXTRA_INSTALL so that it applies regardless.
We might want this in back branches in future, but for the moment
I only risked adding it to v11.
Discussion: https://postgr.es/m/12438.1535488750@sss.pgh.pa.us
Commit aa09cd242 changed a condition in find_em_expr_for_rel from
being a bms_equal comparison of relids to bms_is_subset, in order to
support order by clauses on foreign joins. But this also allows
through the degenerate case of expressions with no Vars at all (and
hence empty relids), including integer constants which will be parsed
unexpectedly on the remote (viz. "ERROR: ORDER BY position 0 is not in
select list" as in the bug report).
Repair by adding an additional !bms_is_empty test.
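The fix is approximately this, in find_em_expr_for_rel:

    /* Reject EC members with no Vars at all (empty relids), which
     * would otherwise pass the subset test trivially. */
    if (bms_is_subset(em->em_relids, rel->relids) &&
        !bms_is_empty(em->em_relids))
        return em->em_expr;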
Backpatch through to 9.6 where the aforementioned change was made.
Per bug #15352 from Maksym Boguk; analysis and patch by me.
Discussion: https://postgr.es/m/153518420278.1478.14875560810251994661@wrigleys.postgresql.org
Like oid2name, vacuumlo has been lacking consistency with other
utilities for its options:
- Connection options gain long aliases.
- Document environment variables which can be used: PGHOST, PGPORT and
PGUSER.
Documentation and code are reordered to be more consistent. A basic
set of TAP tests has been added along the way.
Author: Tatsuro Yamada
Reviewed-by: Michael Paquier
Discussion: https://postgr.es/m/c7e7f25c-1747-cd0f-9335-390bc97b2db5@lab.ntt.co.jp
oid2name has made little effort to keep its interface consistent with
other binary utilities:
- -H was used instead of -h/--host. This option is now marked as
deprecated, but it is still accepted for backward compatibility.
- -P had been removed from the code but was still documented.
- All options gain long aliases, making connection options more similar
to other binaries.
- Document environment variables which can be used: PGHOST, PGPORT and
PGUSER.
A basic set of TAP tests is added along the way, and the documentation
is cleaned up to be more consistent with other utilities.
Author: Tatsuro Yamada
Reviewed-by: Michael Paquier
Discussion: https://postgr.es/m/c7e7f25c-1747-cd0f-9335-390bc97b2db5@lab.ntt.co.jp
regexp_matches, regexp_split_to_table and regexp_split_to_array all
work by compiling a list of match positions as character offsets (NOT
byte positions) in the source string.
Formerly, they then used text_substr to extract the matched text; but
in a multi-byte encoding, that counts the characters in the string,
and the characters needed to reach the starting byte position, on
every call. Accordingly, the performance degraded as the product of
the input string length and the number of match positions, such that
splitting a string of a few hundred kbytes could take many minutes.
Repair by keeping the wide-character copy of the input string
available (only in the case where encoding_max_length is not 1) after
performing the match operation, and extracting substrings from that
instead. This reduces the complexity to being linear in the number of
result bytes, discounting the actual regexp match itself (which is not
affected by this patch).
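In outline, the approach is (make_text_from_wchars is a hypothetical
helper named here only for illustration):

    /* Convert the input to wide characters once, up front. */
    int         len = VARSIZE_ANY_EXHDR(src);
    pg_wchar   *wide = palloc((len + 1) * sizeof(pg_wchar));
    int         wlen = pg_mb2wchar_with_len(VARDATA_ANY(src), wide, len);

    /* ... run the regexp match over the wide-char string ... */

    /* Each match [so, eo) is then extracted in time proportional to
     * its own length, not to its offset into the string. */
    text       *piece = make_text_from_wchars(wide + so, eo - so);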
In passing, remove cleanup using retail pfree() which was obsoleted by
commit ff428cded (Feb 2008) which made cleanup of SRF multi-call
contexts automatic. Also increase (to ~134 million) the maximum number
of matches and provide an error message when it is reached.
Backpatch all the way because this has been wrong forever.
Analysis and patch by me; review by Kaiting Chen.
Discussion: https://postgr.es/m/87pnyn55qh.fsf@news-spur.riddles.org.uk
see also https://postgr.es/m/87lg996g4r.fsf@news-spur.riddles.org.uk
The source code was already set up for NLS support, so just a nls.mk
file needed to be added. Also, fix the old problem of putting the int64
format specifier right into the string, which breaks NLS.
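The usual fix pattern looks roughly like this (variable names are
illustrative):

    char        buf[32];

    /* Broken for NLS: INT64_FORMAT is platform-dependent, so the
     * translatable string (the msgid) differs across platforms. */
    printf(_("blocks scanned: " INT64_FORMAT "\n"), blocks);

    /* NLS-safe: format the int64 separately, keep the msgid stable. */
    snprintf(buf, sizeof(buf), INT64_FORMAT, blocks);
    printf(_("blocks scanned: %s\n"), buf);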
Fix reference to non-existent file in comment.
Add SH_ prefix to the EMPTY and IN_USE tokens, to reduce likelihood of
collisions with unrelated macros.
Add include guards around the function definitions that are not
"parameterized", so the header can be used again in the same translation
unit.
Undefine SH_EQUAL macro where other "parameter" macros are undefined, for
the same reason.
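The overall shape, simplified:

    /* Compile the non-"parameterized" helpers only once per
     * translation unit, so the header can be included again. */
    #ifndef SIMPLEHASH_H
    #define SIMPLEHASH_H
    /* ... helpers that do not depend on SH_PREFIX ... */
    #endif                          /* SIMPLEHASH_H */

    /* At the end of the header, undefine the "parameter" macros
     * together, SH_EQUAL included. */
    #undef SH_PREFIX
    #undef SH_EQUAL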
Author: Thomas Munro
Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/CAEepm%3D1LdXZ3mMTM8tHt_b%3DK1kREit%3Dp8sikesak%3DkzHHM07Nw%40mail.gmail.com
The problem arises with the combination of CALL with output parameters
and doing a COMMIT inside the procedure. When a CALL has output
parameters, the portal uses the strategy PORTAL_UTIL_SELECT instead of
PORTAL_MULTI_QUERY. Using PORTAL_UTIL_SELECT causes the portal's
snapshot to be registered with the current resource
owner (portal->holdSnapshot); see
9ee1cf04ab6bcefe03a11837b53f29ca9dc24c7a for the reason.
Normally, PortalDrop() unregisters the snapshot. If not, then
ResourceOwnerRelease() will print a warning about a snapshot leak on
transaction commit. A transaction commit normally drops all
portals (PreCommit_Portals()), except the active portal. So in the
case of the active portal, we need to manually release the snapshot to
avoid the warning.
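A hedged sketch of the idea (the committed code may differ in where
this hooks in):

    /* At commit, if the active portal still holds a registered
     * snapshot, release it ourselves rather than letting
     * ResourceOwnerRelease() warn about a leaked snapshot. */
    if (portal->holdSnapshot)
    {
        UnregisterSnapshotFromOwner(portal->holdSnapshot,
                                    portal->resowner);
        portal->holdSnapshot = NULL;
    }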
Reported-by: Prabhat Sahu <prabhat.sahu@enterprisedb.com>
Reviewed-by: Jonathan S. Katz <jkatz@postgresql.org>
The archive should show a dependency on the item's table, but it failed
to include one. This could cause failures in parallel restore due to
emitting ALTER TABLE ... ENABLE ROW LEVEL SECURITY before restoring
the table's data. In practice the odds of a problem seem low, since
you would typically need to have set FORCE ROW LEVEL SECURITY as well,
and you'd also need a very high --jobs count to have any chance of this
happening. That probably explains the lack of field reports.
Still, it's a bug, so back-patch to 9.5 where RLS was introduced.
Discussion: https://postgr.es/m/19784.1535390902@sss.pgh.pa.us
A caller of VACUUM can obtain a lock early in the process, causing
other sessions to block on the pending lock request; this potentially
enables denial-of-service attacks, as even a non-privileged user can
attempt a VACUUM of a critical catalog table and thereby block even all
incoming connection attempts.
Contrary to TRUNCATE, a client can attempt a system-wide VACUUM, where
the list of relations to process is built first and vacuum_rel() or
analyze_rel() then try to lock each relation, at which point the
operation would just block. When the client specifies a list of
relations and a relation needs to be skipped, ownership checks are done
while building the list of relations to work on, preventing a later
lock attempt. vacuum_rel() already had the sanity checks needed, except
that they were applied too late. This commit refactors the code so
that relation skips are checked beforehand, making it safer by avoiding
too-early locks, for manual VACUUM both with and without a list of
relations specified. An isolation test is added demonstrating that
early locks no longer happen, with a WARNING message now issued earlier
if the user calling VACUUM is not the relation owner.
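The skip check, in outline (simplified; the real code has more cases,
such as shared relations):

    /* Check ownership before any lock attempt, so an unprivileged
     * caller never queues behind other lockers. */
    if (!pg_class_ownercheck(relid, GetUserId()))
    {
        ereport(WARNING,
                (errmsg("skipping \"%s\" --- only table or database owner can vacuum it",
                        relname)));
        return false;       /* skip this relation, no lock taken */
    }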
When a partitioned table is listed in a manual VACUUM or ANALYZE
command, its full list of partitions is fetched, all partitions get
added to the list to work on, and then each one of them is processed
one by one, with ownership checks happening at the later stage of
vacuum_rel() or analyze_rel(). Doing early ownership checks for each
partition proves tedious, as it would create deadlock risks with lock
upgrades, and skipping all partitions if the listed partitioned table
is not owned would be a behavior change compared to how Postgres 10
implemented vacuum for partitioned tables. The originally reported
problem of early lock queueing on critical relations is fixed anyway,
so priority is given to avoiding a backward-incompatible behavior
change.
Reported-by: Lloyd Albin, Jeremy Schneider
Author: Michael Paquier
Reviewed-by: Nathan Bossart, Kyotaro Horiguchi
Discussion: https://postgr.es/m/152512087100.19803.12733865831237526317@wrigleys.postgresql.org
Discussion: https://postgr.es/m/20180812222142.GA6097@paquier.xyz
The previous coding figured it'd be good enough to postpone opening
the first CSV log file until we got a message we needed to write there.
This is unsafe, though, because if the open fails we end up in infinite
recursion trying to report the failure. Instead make the CSV log file
management code look as nearly as possible like the longstanding logic
for the stderr log file. In particular, open it immediately at postmaster
startup (if enabled), or when we get a SIGHUP in which we find that
log_destination has been changed to enable CSV logging.
It seems OK to fail if a postmaster-start-time open attempt fails, as
we've long done for the stderr log file. But we can't die if we fail
to open a CSV log file during SIGHUP, so we're still left with a problem.
In that case, write any output meant for the CSV log file to the stderr
log file. (This will also cover race-condition cases in which backends
send CSV log data before or after we have the CSV log file open.)
This patch also fixes an ancient oversight that, if CSV logging was
turned off during a SIGHUP, we never actually closed the last CSV
log file.
In passing, remember to reset whereToSendOutput = DestNone during syslogger
start, since (unlike all other postmaster children) it's forked before the
postmaster has done that. This made for a platform-dependent difference
in error reporting behavior between the syslogger and other children:
except on Windows, it'd report problems to the original postmaster stderr
as well as the normal error log file(s). It's barely possible that that
was intentional at some point; but it doesn't seem likely to be desirable
in production, and the platform dependency definitely isn't desirable.
Per report from Alexander Kukushkin. It's been like this for a long time,
so back-patch to all supported branches.
Discussion: https://postgr.es/m/CAFh8B==iLUD_gqC-dAENS0V+kVrCeGiKujtKqSQ7++S-caaChw@mail.gmail.com
Mention that "Latest checkpoint location" will not match in pg_upgrade
if the standby server is still running during the upgrade, which is
possible. "Match" text first appeared in PG 9.5.
Reported-by: Paul Bonaud
Discussion: https://postgr.es/m/c7268794-edb4-1772-3bfd-04c54585c24e@trainline.com
Backpatch-through: 9.5
While we generally don't sweat too much about "may be used uninitialized"
warnings from older compilers, I noticed that there's a fair number of
buildfarm animals that are producing such a warning *only* for this
variable. So it seems worth silencing.
In 86d78ef50e01 I enabled configure to check for C99 support, with the
goal of checking which platforms support C99. While there are a few
machines without C99 support among our buildfarm animals,
de-supporting them for v12 was deemed acceptable.
While not tested in the aforementioned commit, the biggest increase in
minimum compiler version comes from MSVC, which gained C99 support
fairly late. The subset in MSVC 2013 is sufficient for our needs at
this point. While that is a significant increase in minimum version,
the existing Windows binaries are already built with a new enough
version.
Make configure error out if C99 support could not be detected. For
MSVC builds, increase the minimum version to 2013.
The increase to MSVC 2013 allows us to get rid of VCBuildProject.pm,
as that was only required for MSVC 2005/2008.
Author: Andres Freund
Discussion: https://postgr.es/m/97d4b165-192d-3605-749c-f614a0c4e783@2ndquadrant.com
A VACUUM or ANALYZE command that directly lists a partitioned table
expands it to its partitions, causing all elements of the tree to be
processed with individual ownership checks. This results in different
relations being skipped depending on the ownership of each element,
which may not be consistent across a partition tree. This commit adds
more tests to ensure that any future refactoring keeps this behavior
consistent, or at least that any changes made are easily identified and
checked. The current behavior of VACUUM with partitioned tables has
been present since version 10.
Author: Nathan Bossart
Reviewed-by: Michael Paquier
Discussion: https://postgr.es/m/DC186201-B01F-4A66-9EC4-F855A957C1F9@amazon.com
The code in slot_getallattrs() is the same as if slot_getsomeattrs()
were called with the number of attributes specified in the tuple
descriptor. Implement it that way instead of duplicating the code
between those two functions.
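In essence the function body reduces to a single call:

    /* slot_getallattrs: deform all attributes, which is the same as
     * asking slot_getsomeattrs for every attribute in the descriptor. */
    void
    slot_getallattrs(TupleTableSlot *slot)
    {
        slot_getsomeattrs(slot, slot->tts_tupleDescriptor->natts);
    }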
This is part of a patch series abstracting TupleTableSlots so they can
store arbitrary forms of tuples, but is a nice enough cleanup on its
own.
Author: Ashutosh Bapat
Reviewed-By: Andres Freund
Discussion: https://postgr.es/m/20180220224318.gw4oe5jadhpmcdnm@alap3.anarazel.de
Commits c6b3c939b (which fixed the precedence of >=, <=, <> operators)
and 865f14a2d (which added support for the standard => notation for
named arguments) created a class of lexer tokens which look like
multi-character operators but which have their own token IDs distinct
from Op. However, longest-match rules meant that following any of
these tokens with another operator character, as in (1<>-1), would
cause them to be incorrectly returned as Op.
The error here isn't immediately obvious, because the parser would
usually still find the correct operator via the Op token, but there
were more subtle problems:
1. If immediately followed by a comment or +-, >= <= <> would be given
the old precedence of Op rather than the correct new precedence;
2. If followed by a comment, != would be returned as Op rather than as
NOT_EQUAL, causing it not to be found at all;
3. If followed by a comment or +-, the => token for named arguments
would be lexed as Op, causing the argument to be mis-parsed as a
simple expression, usually causing an error.
Fix by explicitly checking for the operators in the {operator} code
block in addition to all the existing special cases there.
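The added check looks approximately like this, inside the {operator}
action after the token has been trimmed back to its longest valid
length (token names per gram.y):

    /* Recheck for the special multi-character operators, which the
     * longest-match trimming can leave behind as plain text. */
    if (nchars == 2)
    {
        if (yytext[0] == '=' && yytext[1] == '>')
            return EQUALS_GREATER;
        if (yytext[0] == '>' && yytext[1] == '=')
            return GREATER_EQUALS;
        if (yytext[0] == '<' && yytext[1] == '=')
            return LESS_EQUALS;
        if (yytext[0] == '<' && yytext[1] == '>')
            return NOT_EQUALS;
        if (yytext[0] == '!' && yytext[1] == '=')
            return NOT_EQUALS;
    }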
Backpatch to 9.5 where the problem was introduced.
Analysis and patch by me; review by Tom Lane.
Discussion: https://postgr.es/m/87va851ppl.fsf@news-spur.riddles.org.uk
The lexer's handling of operators contained an O(N^3) hazard when
dealing with long strings of + or - characters; it seems hard to
prevent this case from being O(N^2), but the additional N multiplier
was not needed.
Backpatch all the way since this has been there since 7.x, and it
presents at least a mild hazard in that trying to do Bind, PREPARE or
EXPLAIN on a hostile query could take excessive time (without
honouring cancels or timeouts) even if the query was never executed.
Historically, we looked up the target hostname in connectDBStart, so that
PQconnectPoll did not need to do DNS name resolution. The patches that
added multiple-target-host support to libpq preserved this division of
labor; but it's really nonsensical now, because it means that if any one
of the target hosts fails to resolve in DNS, the connection fails. That
negates the no-single-point-of-failure goal of the feature. Additionally,
DNS lookups aren't exactly cheap, but the code did them all even if the
first connection attempt succeeds.
Hence, rearrange so that PQconnectPoll does the lookups, and only looks
up a hostname when it's time to try that host. This does mean that
PQconnectPoll could block on a DNS lookup --- but if you wanted to avoid
that, you should be using hostaddr, as the documentation has always
specified. It seems fairly unlikely that any applications would really
care whether the lookup occurs inside PQconnectStart or PQconnectPoll.
In addition to calling out that fact explicitly, do some other minor
wordsmithing in the docs around the multiple-target-host feature.
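For reference, a minimal nonblocking connect loop (host names are
examples only); with this change the DNS lookup for each host happens
inside PQconnectPoll, so use hostaddr if even that must not block:

    PGconn *conn = PQconnectStart("host=db1.example.com,db2.example.com");

    if (conn == NULL || PQstatus(conn) == CONNECTION_BAD)
        /* handle out-of-memory or bad connection options */ ;

    for (;;)
    {
        PostgresPollingStatusType st = PQconnectPoll(conn);

        if (st == PGRES_POLLING_OK || st == PGRES_POLLING_FAILED)
            break;
        /* wait for PQsocket(conn) to become readable or writable,
         * as st indicates, then call PQconnectPoll again */
    }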
Since this seems like a bug in the multiple-target-host feature,
backpatch to v10 where that was introduced. In the back branches,
avoid moving any existing fields of struct pg_conn, just in case
any third-party code is looking into that struct.
Tom Lane, reviewed by Fabien Coelho
Discussion: https://postgr.es/m/4913.1533827102@sss.pgh.pa.us
This test was supposed to check the interaction of INOUT and default
parameters in a procedure call, but it only checked the case where the
parameter was not supplied. Now it also checks the case where the
parameter was supplied. It was already working correctly, so no code
changes required.