This is the converse of the unsafe-usage-of-%m problem: the reason
ereport/elog provide that format code is mainly to dodge the hazard
of errno getting changed before control reaches functions within the
arguments of the macro. I only found one instance of this hazard,
but it's been there since 9.4 :-(.
While glibc's version of printf accepts %m, most others do not;
to be portable, we have to do it the hard way with strerror(errno).
pg_verify_checksums evidently did not get that memo.
Noted while fooling around with NetBSD-current, which generates
a compiler warning for this mistake.
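As a minimal sketch of the portable pattern (the function and message text
are invented for illustration, not taken from pg_verify_checksums):

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>

    /* Nonportable outside elog/ereport: fprintf(stderr, "...: %m\n") only
     * works where the C library's printf understands %m (e.g. glibc). */
    void
    report_open_failure(const char *path)
    {
        /* Portable: spell out the errno string with strerror(). */
        fprintf(stderr, "could not open file \"%s\": %s\n",
                path, strerror(errno));
    }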
The "l" (ell) width spec means something in the corresponding scanf usage,
but not here. While modern POSIX says that applying "l" to "f" and other
floating format specs is a no-op, SUSv2 says it's undefined. Buildfarm
experience says that some old compilers emit warnings about it, and at
least one old stdio implementation (mingw's "ANSI" option) actually
produces wrong answers and/or crashes.
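A short hedged illustration of the difference (standalone C, not code from
the tree; the demo() function is made up):

    #include <stdio.h>

    void
    demo(void)
    {
        double  d = 1.5;
        float   f;

        printf("%f\n", d);          /* fine: varargs promote float to double */
        /* printf("%lf\n", d);         no-op per modern POSIX, undefined per
                                       SUSv2, and breaks some old stdios     */

        sscanf("2.5", "%f", &f);    /* scanf does need "l" to tell apart ... */
        sscanf("2.5", "%lf", &d);   /* ... a float target from a double one  */
    }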
Discussion: https://postgr.es/m/21670.1526769114@sss.pgh.pa.us
Discussion: https://postgr.es/m/c085e1da-0d64-1c15-242d-c921f32e0d5c@dunslane.net
FindDefinedSymbol was intended to take an array of possible include
paths, but it never actually worked correctly for any but the first
array element. Since there's no use-case for more than one path
anyway, let's just simplify this code and its callers by redefining
it as taking only one include path.
Minor other code-beautification without functional effects, except
that in one place we format the output as pgindent would do.
John Naylor
Discussion: https://postgr.es/m/CAJVSVGXM_n32hTTkircW4_K1LQFsJNb6xjs0pAP4QC0ZpyJfPQ@mail.gmail.com
Ancient HPUX, for one, does this. We hadn't noticed due to the lack
of regression tests that required a working strtoll.
(I was slightly tempted to remove the other historical spelling,
strto[u]q, since it seems we have no buildfarm members testing that case.
But I refrained.)
Discussion: https://postgr.es/m/151935568942.1461.14623890240535309745@wrigleys.postgresql.org
Buildfarm member dromedary is still unhappy about the recently-added
ecpg "long long" tests. The reason turns out to be that it includes
"-ansi" in its CFLAGS, and in their infinite wisdom Apple have decided
to hide the declarations of strtoll/strtoull in C89-compliant builds.
(I find it pretty curious that they hide those function declarations
when you can nonetheless declare a "long long" variable, but anyway
that is their behavior, both on dromedary's obsolete macOS version and
the newest and shiniest.) As a result, gcc assumes these functions
return "int", leading naturally to wrong results.
(Looking at dromedary's past build results, it's evident that this
problem also breaks pg_strtouint64() on 32-bit platforms; but we
evidently have no regression tests that exercise that function with
values above 32 bits.)
To fix, supply declarations for these functions when the platform
provides the functions but not the declarations, using the same type
of mechanism as we use for some other similar cases.
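A hedged sketch of that kind of guard (the HAVE_* symbols follow the usual
autoconf conventions and stand in for whatever the configure probes actually
define):

    /* AC_CHECK_DECLS defines HAVE_DECL_STRTOLL to 1 or 0, so when the library
     * has the function but the headers hide its declaration, supply our own. */
    #if defined(HAVE_STRTOLL) && !HAVE_DECL_STRTOLL
    extern long long strtoll(const char *str, char **endptr, int base);
    #endif
    #if defined(HAVE_STRTOULL) && !HAVE_DECL_STRTOULL
    extern unsigned long long strtoull(const char *str, char **endptr, int base);
    #endif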
Discussion: https://postgr.es/m/151935568942.1461.14623890240535309745@wrigleys.postgresql.org
This will only actually exercise the "long long" code paths on platforms
where "long" is 32 bits --- otherwise, the SQL bigint type maps to
plain "long", and we will test that code path instead. But that's
probably sufficient coverage, and anyway we weren't testing either
code path before.
Dang Minh Huong, tweaked a bit by me
Discussion: https://postgr.es/m/151935568942.1461.14623890240535309745@wrigleys.postgresql.org
This is needed for full support of "long long" variables in ecpg, but
the previous patch for bug #15080 (commits 51057feaa et al) missed it.
In MSVC versions where the functions don't exist under those names,
we can nonetheless use _strtoi64() and _strtoui64().
Like the previous patch, back-patch all the way.
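A hedged sketch of that substitution (the HAVE_* guards are illustrative;
_strtoi64() and _strtoui64() are the MSVC names mentioned above):

    #if defined(_MSC_VER)
    /* Map the C99 names onto MSVC's equivalents where the former are missing. */
    #ifndef HAVE_STRTOLL
    #define strtoll _strtoi64
    #endif
    #ifndef HAVE_STRTOULL
    #define strtoull _strtoui64
    #endif
    #endif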
Dang Minh Huong
Discussion: https://postgr.es/m/151935568942.1461.14623890240535309745@wrigleys.postgresql.org
Use DISCARD PLANS instead of a reconnect to force reconstruction of
a cached plan; this corresponds more nearly to what people might
actually do in practice.
Commit bad51a49a tried to use a shortcut with just one stamp file
recording the actions of generating the pg_*_d.h headers and copying
them to the src/include/catalog/ directory. That doesn't work in all
scenarios though, so we must use two stamp files like the Makefiles do.
John Naylor
Discussion: https://postgr.es/m/CANFyU944GdHr=puPbA78STnqr=8kgMrGF-VDHck6aO_-qNDALg@mail.gmail.com
In commit 6bdf1303b, we ensured that power()/^ for float8 would honor
the NaN behaviors specified by POSIX standards released in this century,
ie NaN ^ 0 = 1 and 1 ^ NaN = 1. However, numeric_power() was not
touched and continued to follow the once-common behavior that every
case involving NaN input produces NaN. For consistency, let's switch
the numeric behavior to the modern spec in the same release that ensures
that behavior for float8.
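For reference, a hedged sketch of the spec'd behavior in plain C (a
C99/POSIX-conformant pow() already acts this way; the helper name is made up):

    #include <math.h>

    double
    power_nan_rules(double x, double y)
    {
        if (y == 0.0)
            return 1.0;             /* NaN ^ 0 = 1, even when x is NaN  */
        if (x == 1.0)
            return 1.0;             /* 1 ^ NaN = 1, even when y is NaN  */
        if (isnan(x) || isnan(y))
            return NAN;             /* every other NaN input yields NaN */
        return pow(x, y);
    }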
(Note that while 6bdf1303b was initially back-patched, we later undid
that, concluding that any behavioral change should appear only in v11.)
Discussion: https://postgr.es/m/10898.1526421338@sss.pgh.pa.us
Up to now, it's been safe for plpgsql to store TOAST pointers in its
variables because the ActiveSnapshot for whatever query called the plpgsql
function will surely protect such TOAST values from being vacuumed away,
even if the owning table rows are committed dead. With the introduction of
procedures, that assumption is no longer good in "non atomic" executions
of plpgsql code. We adopt the slightly brute-force solution of detoasting
all TOAST pointers at the time they are stored into variables, if we're in
a non-atomic context, just in case the owning row goes away.
Some care is needed to avoid long-term memory leaks, since plpgsql tends
to run with CurrentMemoryContext pointing to its call-lifespan context,
but we shouldn't assume that no memory is leaked by heap_tuple_fetch_attr.
In plpgsql proper, we can do the detoasting work in the "eval_mcontext".
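A hedged sketch of the idea (the helper is invented for illustration, compiles
only inside a PostgreSQL 11-era source tree, and the committed code also has
to deal with expanded datums and other details):

    #include "postgres.h"
    #include "access/tuptoaster.h"

    /*
     * If we're in a non-atomic context and the value about to be stored is an
     * out-of-line TOAST pointer, fetch the full value into a short-lived
     * context; the normal assignment path then copies the detoasted value
     * into the variable's own storage, so it no longer depends on the
     * TOASTed row staying around.
     */
    static Datum
    maybe_detoast_for_nonatomic(Datum value, bool isnull, bool atomic,
                                MemoryContext tmpcxt)
    {
        if (!atomic && !isnull &&
            VARATT_IS_EXTERNAL(DatumGetPointer(value)))
        {
            MemoryContext oldcxt = MemoryContextSwitchTo(tmpcxt);

            value = PointerGetDatum(heap_tuple_fetch_attr(
                        (struct varlena *) DatumGetPointer(value)));
            MemoryContextSwitchTo(oldcxt);
        }
        return value;
    }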
Most of the code thrashing here is due to the need to add this capability
to expandedrecord.c as well as plpgsql proper. In expandedrecord.c,
we can't assume that the caller's context is short-lived, so make use of
the short-term sub-context that was already invented for checking domain
constraints. In view of this repurposing, it seems good to rename that
variable and associated code from "domain_check_cxt" to "short_term_cxt".
Peter Eisentraut and Tom Lane
Discussion: https://postgr.es/m/5AC06865.9050005@anastigmatix.net
canonicalize_ec_expression() is supposed to agree with coerce_type() as to
whether a RelabelType should be inserted to make a subexpression be valid
input for the operators of a given opclass. However, it did the wrong
thing with named-composite-type inputs to record_eq(): it put in a
RelabelType to RECORDOID, which the parser doesn't. In some cases this was
harmless because all code paths involving a particular equivalence class
did the same thing, but in other cases this would result in failing to
recognize a composite-type expression as being a member of an equivalence
class that it actually is a member of. The most obvious bad effect was to
fail to recognize that an index on a composite column could provide the
sort order needed for a mergejoin on that column, as reported by Teodor
Sigaev. I think there might be other, subtler, cases that result in
misoptimization. It also seems possible that an unwanted RelabelType
would sometimes get into an emitted plan --- but because record_eq and
friends don't examine the declared type of their input expressions, that
would not create any visible problems.
To fix, just treat RECORDOID as if it were a polymorphic type, which in
some sense it is. We might want to consider formalizing that a bit more
someday, but for the moment this seems to be the only place where an
IsPolymorphicType() test ought to include RECORDOID as well.
This has been broken for a long time, so back-patch to all supported
branches.
Discussion: https://postgr.es/m/a6b22369-e3bf-4d49-f59d-0c41d3551e81@sigaev.ru
Previously, we passed the toplevel PlannerInfo, but we actually want
to pass the relevant subroot. One problem with passing the toplevel
PlannerInfo is that the FDW which wants to push down an UPDATE or
DELETE against a join won't find the relevant joinrel there.
As of commit 1bc0100d27, postgres_fdw
tries to do exactly this and can be made to fail an assertion as a
result.
It's possible that this should be regarded as a bug fix and
back-patched to earlier releases, but for lack of a test case that
fails in earlier releases, no back-patch for now.
Etsuro Fujita, reviewed by Amit Langote.
Discussion: http://postgr.es/m/5AF43E02.30000@lab.ntt.co.jp
The impact of VARIADIC on the combine/serialize/deserialize support
functions of an aggregate wasn't thought through carefully. There is
actually no impact, because variadicity isn't passed through to these
functions (and it doesn't seem like it would need to be). However,
lookup_agg_function was mistakenly told to check things as though it were
passed through. The net result was that it was impossible to declare an
aggregate that had both VARIADIC input and parallelism support functions.
In passing, fix a runtime check in nodeAgg.c for the combine function's
strictness to make its error message agree with the creation-time check.
The previous message was actually backwards, and it doesn't seem like
there's a good reason to have two versions of this message text anyway.
Back-patch to 9.6 where parallel aggregation was introduced.
Alexey Bashtanov; message fix by me
Discussion: https://postgr.es/m/f86dde87-fef4-71eb-0480-62754aaca01b@imap.cc
Creating indexes on foreign tables is already forbidden, but local
partitioned indexes (commit 8b08f7d482) forgot to check for them. Add
a preliminary check to prevent wasting time.
Another school of thought says to allow the index to be created if it's
not a unique index; but it's possible to do better in the future (enable
indexing of foreign tables, somehow), so we avoid painting ourselves into a
corner by rejecting all cases for now, heading off future grief (a.k.a.
backward-incompatible changes).
Reported-by: Arseny Sher
Author: Amit Langote, Álvaro Herrera
Discussion: https://postgr.es/m/87sh71cakz.fsf@ars-thinkpad
- Change vacuum_cleanup_index_scale_factor GUC to PGC_USERSET.
The vacuum_cleanup_index_scale_factor GUC was defined as PGC_SIGHUP. But this
GUC affects not only autovacuum; it is also useful to be able to change it from
a user session in order to influence a manually run VACUUM.
- Add missing tab-completion support for the vacuum_cleanup_index_scale_factor
reloption.
- Fix condition for B-tree index cleanup.
A zero value of vacuum_cleanup_index_scale_factor means that the user wants
B-tree index cleanup never to be skipped (see the sketch after this list).
- Documentation and comment improvements
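A hedged sketch of the corrected decision (standalone C with illustrative
names, not the backend function itself):

    #include <stdbool.h>

    /*
     * Cleanup is forced when the scale factor is zero (or negative), when the
     * tuple count from the previous cleanup is unknown (negative), or when
     * the heap has grown by more than the configured fraction since then.
     */
    bool
    btree_needs_cleanup(double cleanup_scale_factor,
                        double prev_num_heap_tuples,
                        double num_heap_tuples)
    {
        if (cleanup_scale_factor <= 0 ||
            prev_num_heap_tuples < 0 ||
            (num_heap_tuples - prev_num_heap_tuples) / prev_num_heap_tuples >=
            cleanup_scale_factor)
            return true;

        return false;
    }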
Authors: Justin Pryzby, Alexander Korotkov, Liudmila Mantrova
Reviewed by: all authors and Robert Haas
Discussion: https://www.postgresql.org/message-id/flat/20180502023025.GD7631%40telsasoft.com
DST law changes in North Korea. Redefinition of "daylight savings" in
Ireland, as well as for some past years in Namibia and Czechoslovakia.
Additional historical corrections for Czechoslovakia.
With this change, the IANA database models Irish timekeeping as following
"standard time" in summer, and "daylight savings" in winter, so that the
daylight savings offset is one hour behind standard time not one hour
ahead. This does not change their UTC offset (+1:00 in summer, 0:00 in
winter) nor their timezone abbreviations (IST in summer, GMT in winter),
though now "IST" is more correctly read as "Irish Standard Time" not "Irish
Summer Time". However, the "is_dst" column in the pg_timezone_names view
will now be true in winter and false in summer for the Europe/Dublin zone.
Similar changes were made for Namibia between 1994 and 2017, and for
Czechoslovakia between 1946 and 1947.
So far as I can find, no Postgres internal logic cares about which way
tm_isdst is reported; in particular, since commit b2cbced9e we do not
rely on it to decide how to interpret ambiguous timestamps during DST
transitions. So I don't think this change will affect any Postgres
behavior other than the timezone-view outputs.
Discussion: https://postgr.es/m/30996.1525445902@sss.pgh.pa.us
match_clause_to_partition_key failed to consider COERCION_PATH_ARRAYCOERCE
cases in scalar-op-array expressions, so it was possible to crash the
server easily. To handle this case properly (ie. prune partitions) we
would need to run a bit of executor code during planning. Maybe it can
be improved, but for now let's just not crash. Add a test case that
used to trigger the crash.
Author: Michaël Paquier
match_clause_to_partition_key failed to indicate that operators that
don't have a commutator in a btree opclass are unsupported. It is
possible for this to cause a crash later if such an operator is used in
a scalar-op-array expression. Add a test case that used to trigger the crash.
Author: Amit Langote
One caller of gen_partprune_steps_internal in
match_clause_to_partition_key was too optimistic about the former never
returning an empty step list. Rid it of its innocence. (Having fixed
the bug above, I no longer know how to exploit this, so no test case for
it, but it remained a bug.) Revise code flow a little bit, for
succinctness.
Author: Álvaro Herrera
Reported-by: Marina Polyakova
Reviewed-by: Michaël Paquier
Reviewed-by: Amit Langote
Reviewed-by: Álvaro Herrera
Discussion: https://postgr.es/m/ff8f9bfa485ff961d6bb43e54120485b@postgrespro.ru
The vertical tightness settings collapse vertical whitespace between
opening and closing brackets (parentheses, square brackets and braces).
This can make data structures in particular harder to read, and is not
very consistent with our style in non-Perl code. This patch restricts
that setting to parentheses only, and reformats all the perl code
accordingly. Not applying this to parentheses has some unfortunate
effects, so the consensus is to keep the setting for parentheses and not
for the others.
The diff for this patch does highlight some places where structures
should have trailing commas. They can be added manually, as there is no
automatic tool to do so.
Discussion: https://postgr.es/m/a2f2b87c-56be-c070-bfc0-36288b4b41c1@2ndQuadrant.com
There's no need to export this function, so don't. Michaël didn't
actually write the patch, but we list him as first author because with a
trivial one like this, intellectual authorship is as important as (if not
more important than) bit shovelling.
Author: Michaël Paquier, Amit Langote
Discussion: https://postgr.es/m/c91299c4-199b-0f16-339b-a29d6d2a39ee@lab.ntt.co.jp
The regexes used in 102_vacuumdb_stages.pl to check the postmaster log
for expected output contained several places with ".*.*", which is
underdetermined and can cause exponential runtime growth in Perl's regex
matcher (since it's not bright enough not to waste time seeing whether
different splits of the same substring would allow a match). We were
fortunate that the amount of text in the postmaster log was generally not
enough to make the runtime go to the moon; although commit 6271fceb8 had
been on the hairy edge of an obvious problem, thanks to its increasing the
default log verbosity to DEBUG1. Experimentation shows that anyone who
tried to run this test case with an even higher log verbosity would have
been in for serious pain. But even at default logging level, fixing this
saves several hundred ms on my workstation, more on slower buildfarm
members.
Remove the extra ".*"s, restoring more-or-less-linear matching speed.
Back-patch to 9.4 where the test case was added, mostly in case anyone
tries to do related debugging in a back branch.
Discussion: https://postgr.es/m/32459.1525657786@sss.pgh.pa.us
While poking into initdb's performance, I noticed that this query
wasn't being done very intelligently. By forcing it to execute
obj_description() for each pg_proc/pg_operator join row, we were
essentially setting up a nestloop join to pg_description, which
is not a bright query plan when there are hundreds of outer rows.
Convert the check for a "deprecated" operator into a NOT EXISTS
so that it can be done as a hashed antijoin. On my workstation
this reduces the time for this query from ~ 35ms to ~ 10ms.
Which is not a huge win, but it adds up over buildfarm runs.
In passing, insert forced query breaks (\n\n, in single-user mode)
after each SQL-query file that initdb sources, and after some
relatively new queries in setup_privileges(). This doesn't make
a lot of difference normally, but it will result in briefer, saner
error messages if anything goes wrong.
Brown-paper-bag bug in commit 7c91a0364: when we rearranged the placement
of "reltuples += 1" statements, we missed including one in this code path.
The net effect of that was that CREATE INDEX CONCURRENTLY would set the
table's pg_class.reltuples to zero, as would index builds done during
bootstrap mode. (It seems like parallel index builds ought to fail
similarly, but they don't, perhaps because reltuples is computed in some
other way. You certainly couldn't figure that out from the abysmally
underdocumented parallelism code in this area.)
I was led to this by wondering why initdb seemed to have slowed down as
a result of 7c91a0364, as is evident in the buildfarm's timing history.
The reason is that every system catalog with indexes had pg_class.reltuples
= 0 after bootstrap, causing the planner to make some terrible choices for
queries in the post-bootstrap steps. On my workstation, this fix causes
the runtime of "initdb -N" to drop from ~2.0 sec to ~1.4 sec, which is
almost though not quite back to where it was in v10. That's not much of
a deal for production use perhaps, but it makes a noticeable difference
for buildfarm and "make check-world" runs, which do a lot of initdbs.
In Catalog.pm, mark the use of string eval (rather than block eval) as
allowed. Turn off perlcritic entirely in Gen_dummy_probes.pl, as it's
generated code.
Protect a couple of lines in plperl code from perltidy, so that the
annotation for perlcritic stays on the same line as the construct it
would otherwise object to.
Commit 6271fceb8 changed PostgresNode.pm to force log_min_messages = debug1
in all TAP tests, without any discussion and without a concrete need for
it. This makes some of the TAP tests noticeably slower (although much of
that may be due to poorly-written regexes), and for certain it's bloating
the buildfarm logs. Revert the change.
Discussion: https://postgr.es/m/32459.1525657786@sss.pgh.pa.us
Commit 86f575948 already manually updated the oidjoins test for the
new pg_constraint.conparentid => pg_constraint.oid relationship, but
failed to update findoidjoins/README, thus the apparent inconsistency
here.
Michael Paquier
Discussion: https://postgr.es/m/20180507001811.GA27389@paquier.xyz
Most versions of "dtrace -h" drop const qualifiers from the declarations
of probe functions (though macOS gets it right). This causes compiler
warnings when we pass in pointers to const. Repair by extending our
existing post-processing of the probes.h file. To do so, assume that all
"char *" arguments should be "const char *"; that seems reasonably safe.
Thomas Munro
Discussion: https://postgr.es/m/CAEepm=2j1pWSruQJqJ91ZDzD8w9ZZDsM4j2C6x75C-VryWg-_w@mail.gmail.com
Mark Dilger pointed out that the bootstrap parser does not allow
any of its keywords to appear as column values unless they're quoted,
and proposed dealing with that by quoting such values in genbki.pl.
Looking closer, though, we also have that problem with respect to table,
column, and type names appearing in the .bki file: the parser would fail
if any of those matched any of its keywords. While so far there have
been no conflicts (that I've heard of), this seems like a booby trap
waiting to catch somebody. Rather than clutter genbki.pl with enough
quoting logic to handle all that, let's make the bootstrap parser grow
up a little bit and treat its keywords as unreserved.
Experimentation shows that it's fairly easy to do so with the exception
of _null_, which I don't have a big problem with keeping as a reserved
word. The only change needed is that we can't have the "close" command
take an optional table name: it has to either require or forbid the
table name to avoid shift/reduce conflicts. genbki.pl has historically
always included the table name, so I took that option.
The implementation has bootscanner.l passing forward the string value
of each keyword, in case bootparse.y needs that. This avoids needing to
know the precise spelling of each keyword in bootparse.y, which is good
because that's not always obvious from the token name.
Discussion: https://postgr.es/m/3024FC91-DB6D-4732-B31C-DF772DF039A0@gmail.com
This reverts commit 55e0e45817.
It's served its purpose of demonstrating what was wrong on
buildfarm member opossum. We could consider putting some kind
of single-purpose hack into ftod() to make the test pass there;
but I don't think it's worth the trouble, since there are surely
many other places where this platform bug could manifest.
In commit 8b29e88cd, I'd dithered about whether to make
in_range_float4_float8 be a standalone copy of the float in-range logic
or have it punt to in_range_float8_float8. I went with the latter, which
saves code space though at the cost of performance and readability.
However, it emerges that this tickles a compiler or hardware bug on
buildfarm member opossum. Test results from commit 55e0e4581 show
conclusively that widening a float4 NaN to float8 produces Inf, not NaN,
on that machine; which accounts perfectly for the window RANGE test
failures it's been showing. We can dodge this problem by making
in_range_float4_float8 be an independent function, so that it checks
for NaN inputs before widening them.
Ordinarily I'd not be very excited about working around such obviously
broken functionality; but given that this was a judgment call to begin
with, I don't mind reversing it.
If a continuation record is split so that its first half has already been
removed from the master, and is only present in pg_wal, and there is a
recycled WAL segment in the standby server that looks like it would
contain the second half, recovery would get stuck. The code in
XLogPageRead() incorrectly started streaming at the beginning of the
WAL record, even if we had already read the first page.
Backpatch to 9.4. In principle, older versions have the same problem, but
without replication slots, there was no straightforward mechanism to
prevent the master from recycling old WAL that was still needed by the standby.
Without such a mechanism, I think it's reasonable to assume that there's
enough slack in how many old segments are kept around to not run into this,
or you have a WAL archive.
Reported by Jonathon Nelson. Analysis and patch by Kyotaro HORIGUCHI, with
some extra comments by me.
Discussion: https://www.postgresql.org/message-id/CACJqAM3xVz0JY1XFDKPP%2BJoJAjoGx%3DGNuOAshEDWCext7BFvCQ%40mail.gmail.com