Commit fc7695891 changed CheckAttributeType to recurse into ranges,
but made it pass down the wrong collation (always InvalidOid, since
ranges as such have no collation). This would result in guaranteed
failure when considering a range type whose subtype is collatable.
Embarrassingly, we lack any regression tests that would expose such
a problem (but fortunately, somebody noticed before we shipped this
bug in any release).
Fix it to pass down the range's subtype collation property instead,
and add some regression test cases to exercise collatable-subtype
ranges a bit more. Back-patch to all supported branches, as the
previous patch was.
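For illustration, a case of the kind the new tests exercise (type and
table names hypothetical):
    CREATE TYPE textrange AS RANGE (subtype = text);
    CREATE TABLE tr (r textrange);  -- failed before this fix, since
                                    -- text is collatable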
Report and patch by Julien Rouhaud, test cases tweaked by me
Discussion: https://postgr.es/m/CAOBaU_aBWqNweiGUFX0guzBKkcfJ8mnnyyGC_KBQmO12Mj5f_A@mail.gmail.com
I had supposed that the from_char_seq_search() call sites were
all passing the constant arrays you'd expect them to pass ...
but on looking closer, the one for DY format was passing the
days[] array not days_short[]. This accidentally worked because
the day abbreviations in English are all the same as the first
three letters of the full day names. However, once we took out
the "maximum comparison length" logic, it stopped working.
As penance for that oversight, add regression test cases covering
this, as well as every other switch case in DCH_from_char() that
was not reached according to the code coverage report.
Also, fold the DCH_RM and DCH_rm cases into one --- now that
seq_search is case independent, there's no need to pass different
comparison arrays for those cases.
Back-patch, as the previous commit was.
seq_search(), which is used to match input substrings to constants
such as month and day names, had a lot of bizarre and unnecessary
behaviors. It was mostly possible to avert our eyes from that before,
but we don't want to duplicate those behaviors in the upcoming patch
to allow recognition of non-English month and day names. So it's time
to clean this up. In particular:
* seq_search scribbled on the input string, which is a pretty dangerous
thing to do, especially in the badly underdocumented way it was done here.
Fortunately the input string is a temporary copy, but that was being made
three subroutine levels away, making it something easy to break
accidentally. The behavior is externally visible nonetheless, in the form
of odd case-folding in error reports about unrecognized month/day names.
The scribbling is evidently being done to save a few calls to pg_tolower,
but that's such a cheap function (at least for ASCII data) that it's
pretty pointless to worry about. In HEAD I switched it to be
pg_ascii_tolower to ensure it is cheap in all cases; but there are corner
cases in Turkish where this'd change behavior, so leave it as pg_tolower
in the back branches.
* seq_search insisted on knowing the case form (all-upper, all-lower,
or initcap) of the constant strings, so that it didn't have to case-fold
them to perform case-insensitive comparisons. This likewise seems like
excessive micro-optimization, given that pg_tolower is certainly very
cheap for ASCII data. It seems unsafe to assume that we know the case
form that will come out of pg_locale.c for localized month/day names, so
it's better just to define the comparison rule as "downcase all strings
before comparing". (The choice between downcasing and upcasing is
arbitrary so far as English is concerned, but it might not be in other
locales, so follow citext's lead here.)
* seq_search also had a parameter that'd cause it to report a match
after a maximum number of characters, even if the constant string were
longer than that. This was not actually used because no caller passed
a value small enough to cut off a comparison. Replicating that behavior
for localized month/day names seems expensive as well as useless, so
let's get rid of that too.
* from_char_seq_search used the maximum-length parameter to truncate
the input string in error reports about not finding a matching name.
This leads to rather confusing reports in many cases. Worse, it is
outright dangerous if the input string isn't all-ASCII, because we
risk truncating the string in the middle of a multibyte character.
That'd lead either to delivering an illegible error message to the
client, or to encoding-conversion failures that obscure the actual
data problem. Get rid of that in favor of truncating at whitespace
if any (a suggestion due to Alvaro Herrera).
In addition to fixing these things, I const-ified the input string
pointers of DCH_from_char and its subroutines, to make sure there
aren't any other scribbling-on-input problems.
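As an illustration of the new "downcase all strings before comparing"
rule, name matching is insensitive to the case of the input
(example illustrative):
    SELECT to_date('14 FEBRUARY 2020', 'DD month YYYY');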
The risk of generating a badly-encoded error message seems like
enough of a bug to justify back-patching, so patch all supported
branches.
Discussion: https://postgr.es/m/29432.1579731087@sss.pgh.pa.us
Attempting to use CREATE INDEX, DROP INDEX or REINDEX with CONCURRENTLY
on a temporary relation with ON COMMIT actions triggered unexpected
errors because those operations use multiple transactions internally to
complete their work. For example, here is one confusing error when
using ON COMMIT DELETE ROWS:
ERROR: index "foo" already contains data
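Such a failure could be reached, for instance, with a sequence like
this (names illustrative):
    CREATE TEMP TABLE tmp_tab (a int) ON COMMIT DELETE ROWS;
    CREATE INDEX CONCURRENTLY foo ON tmp_tab (a);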
Issues related to temporary relations and concurrent indexing are
fixed in this commit by enforcing the non-concurrent path for
temporary relations even if CONCURRENTLY is specified, transparently
to the user. Using the non-concurrent path does not matter in
practice, as locks cannot be taken on a temporary relation by any
session other than the one owning it, and the non-concurrent
operation is cheaper anyway.
The problem exists with REINDEX since v12 with the introduction of
CONCURRENTLY, and with CREATE/DROP INDEX since CONCURRENTLY exists for
those commands. In all supported versions, this caused only confusing
error messages to be generated. Note that with REINDEX, it was also
possible to issue a REINDEX CONCURRENTLY for a temporary relation owned
by a different session, leading to a server crash.
The idea of transparently enforcing the non-concurrent code path for
temporary relations comes originally from Andres Freund.
Reported-by: Manuel Rigger
Author: Michael Paquier, Heikki Linnakangas
Reviewed-by: Andres Freund, Álvaro Herrera, Heikki Linnakangas
Discussion: https://postgr.es/m/CA+u7OA6gP7YAeCguyseusYcc=uR8+ypjCcgDDCTzjQ+k6S9ksQ@mail.gmail.com
Backpatch-through: 9.4
When estimating the selectivity of "range_var <@ range_constant" or
"range_var @> range_constant", if the upper (or respectively lower)
bound of the range_constant was above the last bin of the range_var's
histogram, the code would access uninitialized memory and potentially
crash (though it seems the probability of a crash is quite low).
Handle the endpoint cases explicitly to fix that.
While at it, be more paranoid about the possibility of getting NaN
or other silly results from the range type's subdiff function.
And improve some comments.
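For reference, the affected estimation path is reached by clauses of
this shape (illustrative), when the constant's bound falls beyond the
last bin of the column's histogram:
    SELECT * FROM tbl WHERE r <@ '[2020,2120)'::int4range;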
Ordinarily we'd probably add a regression test case demonstrating
the bug in unpatched code. But it's too hard to get it to crash
reliably because of the uninitialized-memory dependence, so skip that.
Per bug #16122 from Adam Scott. It's been broken from the beginning,
apparently, so backpatch to all supported branches.
Diagnosis by Michael Paquier, patch by Andrey Borodin and Tom Lane.
Discussion: https://postgr.es/m/16122-eb35bc248c806c15@postgresql.org
The DTK_DOW/DTK_ISODOW and DTK_DOY switch cases in timestamp_part() and
timestamptz_part() contained calls of timestamp2tm() that were fully
redundant with the ones done just above the switch. This evidently crept
in during commit 258ee1b63, which relocated that code from another place
where the calls were indeed needed. Just delete the redundant calls.
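The affected branches are reached by calls such as (illustrative):
    SELECT date_part('dow', timestamp '2020-02-11');
    SELECT date_part('isodow', timestamptz '2020-02-11 00:00+00');
    SELECT date_part('doy', timestamp '2020-02-11');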
I (tgl) noted that our test coverage of these functions left quite a
bit to be desired, so extend timestamp.sql and timestamptz.sql to
cover all the branches.
Back-patch to all supported branches, as the previous commit was.
There's no real issue here other than some wasted cycles in some
not-too-heavily-used code paths, but the test coverage seems valuable.
Report and patch by Li Japin; test case adjustments by me.
Discussion: https://postgr.es/m/SG2PR06MB37762CAE45DB0F6CA7001EA9B6550@SG2PR06MB3776.apcprd06.prod.outlook.com
We implement ON COMMIT DELETE ROWS by truncating tables marked that
way, which requires also truncating/rebuilding their indexes. But
RelationTruncateIndexes asks the relcache for up-to-date copies of any
index expressions, which may cause execution of eval_const_expressions
on them, which can result in actual execution of subexpressions.
This is a bad thing to have happening during ON COMMIT. Manuel Rigger
reported that use of a SQL function resulted in crashes due to
expectations that ActiveSnapshot would be set, which it isn't.
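A sketch of the reported scenario (names illustrative):
    CREATE FUNCTION f(int) RETURNS int IMMUTABLE
      LANGUAGE sql AS 'SELECT $1 + 0';
    CREATE TEMP TABLE tt (a int) ON COMMIT DELETE ROWS;
    CREATE INDEX ON tt (f(a));
    -- the truncation at COMMIT re-derived the index expression,
    -- inlining f() with no ActiveSnapshot set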
The most obvious fix perhaps would be to push a snapshot during
PreCommit_on_commit_actions, but I think that would just open the door
to more problems: CommitTransaction explicitly expects that no
user-defined code can be running at this point.
Fortunately, since we know that no tuples exist to be indexed, there
seems no need to use the real index expressions or predicates during
RelationTruncateIndexes. We can set up dummy index expressions
instead (we do need something that will expose the right data type,
as there are places that build index tupdescs based on this), and
just ignore predicates and exclusion constraints.
In a green field it'd likely be better to reimplement ON COMMIT DELETE
ROWS using the same "init fork" infrastructure used for unlogged
relations. That seems impractical without catalog changes though,
and even without that it'd be too big a change to back-patch.
So for now do it like this.
Per private report from Manuel Rigger. This has been broken forever,
so back-patch to all supported branches.
While a self-referential view doesn't actually work, it's possible
to create one, and it turns out that this breaks some of the
information_schema views. Those views call relation_is_updatable(),
which neglected to consider the hazards of being recursive. In
older PG versions you get a "stack depth limit exceeded" error,
but since v10 it'd recurse to the point of stack overrun and crash,
because commit a4c35ea1c took out the expression_returns_set() call
that was incidentally checking the stack depth.
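A self-referential view can be made like this (names illustrative),
after which querying the information_schema views reached the
recursion:
    CREATE TABLE base (a int);
    CREATE VIEW v AS SELECT a FROM base;
    CREATE OR REPLACE VIEW v AS SELECT a FROM v;
    SELECT is_updatable FROM information_schema.views
      WHERE table_name = 'v';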
Since this function is only used by information_schema views, it
seems like it'd be better to return "not updatable" than suffer
an error. Hence, add tracking of what views we're examining,
in just the same way that the nearby fireRIRrules() code detects
self-referential views. I added a check_stack_depth() call too,
just to be defensive.
Per private report from Manuel Rigger. Back-patch to all
supported versions.
It turns out that commit e9f1c01b7 missed a case: we must print a
VALUES clause in long format if get_query_def is given a resultDesc
that would require the query's output column name(s) to be different
from what the bare VALUES clause would produce.
This applies in case an ALTER ... RENAME COLUMN has been done to
a view that formerly could be printed in simple format, as shown
in the added regression test case. It also explains bug #16119
from Dmitry Telpt, because it turns out that (unlike CREATE VIEW)
CREATE MATERIALIZED VIEW fails to apply any column aliases it's
given to the stored ON SELECT rule. So to get them to be printed,
we have to account for the resultDesc renaming. It might be worth
changing the matview code so that it creates the ON SELECT rule
with the correct aliases; but we'd still need these messy checks in
get_simple_values_rte to handle the case of a subsequent column
rename, so any such change would be just neatnik-ism not a bug fix.
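The scenario resembles (illustrative):
    CREATE VIEW v AS VALUES (1), (2);
    ALTER TABLE v RENAME COLUMN column1 TO x;
    -- pg_get_viewdef('v') must now print something like
    --   SELECT column1 AS x FROM (VALUES (1), (2)) "*VALUES*"
    -- rather than the bare VALUES form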
Like the previous patch, back-patch to all supported branches.
Discussion: https://postgr.es/m/16119-e64823f30a45a754@postgresql.org
This patch adopts the overflow check logic introduced by commit cbdb8b4c0
into two more places. interval_mul() failed to notice if it computed a
new microseconds value that was one more than INT64_MAX, and pgbench's
double-to-int64 logic had the same sorts of edge-case problems that
cbdb8b4c0 fixed in the core code.
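For instance, a product landing exactly one past INT64_MAX
microseconds should now raise an error instead of wrapping
(illustrative of the boundary case; 2 usec * 2^62 = 2^63):
    SELECT interval '2 microseconds' * 4611686018427387904;
    -- expected: ERROR: interval out of range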
To make this easier to get right in future, put the guts of the checks
into new macros in c.h, and add commentary about how to use the macros
correctly.
Back-patch to all supported branches, as we did with the previous fix.
Yuya Watari
Discussion: https://postgr.es/m/CAJ2pMkbkkFw2hb9Qb1Zj8d06EhWAQXFLy73St4qWv6aX=vqnjw@mail.gmail.com
Rearrange the logic in record_image_cmp() and record_image_eq() to
error out on unexpected typlens (either not supported there or
completely invalid due to corruption). Barring corruption, this case
is not reachable today, but checking for it explicitly seems more
future-proof and robust.
Reported-by: Peter Geoghegan <pg@bowt.ie>
Otherwise it can be hard to see where an error is coming from, when
the parallel worker sets all the GUCs that it received from the
leader. Bug #15726. Back-patch to 9.5, where RestoreGUCState()
appeared.
Reported-by: Tiago Anastacio
Reviewed-by: Daniel Gustafsson, Tom Lane
Discussion: https://postgr.es/m/15726-6d67e4fa14f027b3%40postgresql.org
Commit 5ac0d9360 failed to entirely fix bitshiftright's habit of
leaving one-bits in the pad space that should be all zeroes,
because in a moment of sheer brain fade I'd concluded that only
the code path used for not-a-multiple-of-8 shift distances needed
to be fixed. Of course, a multiple-of-8 shift distance can also
cause the problem, so we need to forcibly zero the extra bits
in both cases.
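For example (illustrative), a multiple-of-8 shift of a string whose
length is not a multiple of 8 also moves one-bits into the pad space:
    SELECT (b'111111111111' >> 8) = b'000000001111';  -- must be true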
Per bug #16037 from Alexander Lakhin. As before, back-patch to all
supported branches.
Discussion: https://postgr.es/m/16037-1d1ebca564db54f4@postgresql.org
Encoding conversion uses the very simplistic rule that the output
can't be more than 4X longer than the input, and pallocs a buffer
of that size. This results in failure to convert any string longer
than 1/4 GB, which is becoming an annoying limitation.
As a band-aid to improve matters, allow the allocated output buffer
size to exceed 1GB. We still insist that the final result fit into
MaxAllocSize (1GB), though. Perhaps it'd be safe to relax that
restriction, but it'd require close analysis of all callers, which
is daunting (not least because external modules might call these
functions). For the moment, this should allow a 2X to 4X improvement
in the longest string we can convert, which is a useful gain in
return for quite a simple patch.
Also, once we have successfully converted a long string, repalloc
the output down to the actual string length, returning the excess
to the malloc pool. This seems worth doing since we can usually
expect to give back several MB if we take this path at all.
This still leaves much to be desired, most notably that the assumption
that MAX_CONVERSION_GROWTH == 4 is very fragile, and yet we have no
guard code verifying that the output buffer isn't overrun. Fixing
that would require significant changes in the encoding conversion
APIs, so it'll have to wait for some other day.
The present patch seems safely back-patchable, so patch all supported
branches.
Alvaro Herrera and Tom Lane
Discussion: https://postgr.es/m/20190816181418.GA898@alvherre.pgsql
Discussion: https://postgr.es/m/3614.1569359690@sss.pgh.pa.us
Up to now, if you resized a large (>8K) palloc chunk down to a smaller
size, aset.c made no attempt to return any space to the malloc pool.
That's unpleasant if a really large allocation is resized to a
significantly smaller size. I think no such cases existed when this
code was designed, and I'm not sure whether they're common even yet,
but an upcoming fix to encoding conversion will certainly create such
cases. Therefore, fix AllocSetRealloc so that it gives realloc()
a chance to do something with the block. This doesn't noticeably
increase complexity, we mostly just have to change the order in which
the cases are considered.
Back-patch to all supported branches.
Discussion: https://postgr.es/m/20190816181418.GA898@alvherre.pgsql
Discussion: https://postgr.es/m/3614.1569359690@sss.pgh.pa.us
If the bitstring length is not a multiple of 8, we'd shift the
rightmost bits into the pad space, which must be zeroes --- bit_cmp,
for one, depends on that. This'd lead to the result failing to
compare equal to what it should compare equal to, as reported in
bug #16013 from Daryl Waycott.
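For example (illustrative):
    SELECT (b'1001' >> 1) = b'0100';
    -- must be true, but the '1' shifted into the pad space made the
    -- comparison fail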
This is, if memory serves, not the first such bug in the bitstring
functions. In hopes of making it the last one, do a bit more work
than minimally necessary to fix the bug:
* Add assertion checks to bit_out() and varbit_out() to complain if
they are given incorrectly-padded input. This will improve the
odds that manual testing of any new patch finds problems.
* Encapsulate the padding-related logic in macros to make it
easier to use.
Also, remove unnecessary padding logic from bit_or() and bitxor().
Somebody had already noted that we need not re-pad the result of
bit_and() since the inputs are required to be the same length,
but failed to extrapolate that to the other two.
Also, move a comment block that once was near the head of varbit.c
(but people kept putting other stuff in front of it), to put it in
the header block.
Note for the release notes: if anyone has inconsistent data as a
result of saving the output of bitshiftright() in a table, it's
possible to fix it with something like
UPDATE mytab SET bitcol = ~(~bitcol) WHERE bitcol != ~(~bitcol);
This has been broken since day one, so back-patch to all supported
branches.
Discussion: https://postgr.es/m/16013-c2765b6996aacae9@postgresql.org
An empty file name or subdirectory name leads join_path_components() to
just produce the parent directory name, which leads to weird failures or
recursive inclusions. Let's throw a specific error for that. It takes
only slightly more code to detect all-blank names, so do so.
Also, detect direct recursion, ie a file calling itself. As coded
this will also detect recursion via "include_dir '.'", which is
perhaps more likely than explicitly including the file itself.
Detecting indirect recursion would require API changes for guc-file.l
functions, which seems not worth it since extensions might call them.
The nesting depth limit will catch such cases eventually, just not
with such an on-point error message.
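Configuration fragments of these shapes (illustrative) now draw
specific errors:
    include ''                  # empty file name
    include 'postgresql.conf'   # direct recursion: file includes itself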
In passing, adjust the example usages in postgresql.conf.sample
to perhaps eliminate the problem at the source: there's no reason
for the examples to suggest that an empty value is valid.
Per a trouble report from Brent Bates. Back-patch to 9.5; the
issue is old, but the code in 9.4 is enough different that the
patch doesn't apply easily, and it doesn't seem worth the trouble
to fix there.
Ian Barwick and Tom Lane
Discussion: https://postgr.es/m/8c8bcbca-3bd9-dc6e-8986-04a5abdef142@2ndquadrant.com
ALTER SYSTEM itself normally won't make duplicate entries (although
up till this patch, it was possible to confuse it by writing case
variants of a GUC's name). However, if some external tool has appended
entries to the file, that could result in duplicate entries for a single
GUC name. In such a situation, ALTER SYSTEM did exactly the wrong thing,
because it replaced or removed only the first matching entry, leaving
the later one(s) still there and hence still determining the active value.
This patch fixes that by making ALTER SYSTEM sweep through the file and
remove all matching entries, then (if not ALTER SYSTEM RESET) append the
new setting to the end. This means entries will be in order of last
setting rather than first setting, but that shouldn't hurt anything.
Also, make the comparisons case-insensitive so that the right things
happen if you do, say, ALTER SYSTEM SET "TimeZone" = 'whatever'.
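For instance (commands illustrative), with duplicate entries for the
same GUC present in the file:
    ALTER SYSTEM SET timezone = 'UTC';  -- now rewrites every matching line
    ALTER SYSTEM RESET timezone;        -- now removes every matching line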
This has been broken since ALTER SYSTEM was invented, so back-patch
to all supported branches.
Ian Barwick, with minor mods by me
Discussion: https://postgr.es/m/aed6cc9f-98f3-2693-ac81-52bb0052307e@2ndquadrant.com
When parsing a timetz string that includes a dynamic timezone
abbreviation, or no timezone at all, it was possible to generate
incorrect timestamps from a date computed with uninitialized
variables, if the input string did not fully specify a date. This
case was already checked for when a full timezone spec is included in
the input string, but the two other cases mentioned above lacked the
same checks.
Fix by raising an error for such invalid input, in short whenever a
date is not fully specified.
Valgrind was complaining about this problem.
Bug: #15910
Author: Alexander Lakhin
Discussion: https://postgr.es/m/15910-2eba5106b9aa0c61@postgresql.org
Backpatch-through: 9.4
Commit aa27977fe2 introduced this
restriction for pg_temp.function_name(arg); do likewise for types
created in temporary schemas. Programs that this breaks should add
"pg_temp." schema qualification or switch to arg::type_name syntax.
Back-patch to 9.4 (all supported versions).
Reviewed by Tom Lane. Reported by Tom Lane.
Security: CVE-2019-10208
pg_timezone_names() tries to avoid showing the "Factory" zone in
the view, mainly because that has traditionally had a very long
"abbreviation" such as "Local time zone must be set--see zic manual page",
so that showing it messes up psql's formatting of the whole view.
Since tzdb version 2016g, IANA instead uses the abbreviation "-00",
which is sane enough that there's no reason to discriminate against it.
On the other hand, it emerges that FreeBSD and possibly other packagers
are so wedded to backwards compatibility that they hack the IANA data
to keep the old spelling --- and not just that old spelling, but even
older spellings that IANA used back in the stone age. This caused the
filter logic to fail to suppress "Factory" at all on such platforms,
though the formatting problem is definitely real in that case.
To solve both problems, get rid of the hard-wired assumption about
exactly what Factory's abbreviation is, and instead reject abbreviations
exceeding 31 characters. This will allow Factory to appear in the view
if and only if it's using the modern abbreviation.
In passing, simplify the code we add to zic.c to support "zic -P"
to remove its now-obsolete hacks to not print the Factory zone's
abbreviation. Unlike pg_timezone_names(), there's no reason for
that code to support old/nonstandard timezone data.
Since we generally prefer to keep timezone-related behavior the
same in all branches, and since this is arguably a bug fix,
back-patch to all supported branches.
Discussion: https://postgr.es/m/3961.1564086915@sss.pgh.pa.us
Money values exceeding about 18 digits (depending on lc_monetary)
could be inaccurately converted to numeric, due to select_div_scale()
deciding it didn't need to compute any fractional digits. Force
its hand by setting the dscale of one division input to equal the
number of fractional digits we need.
In passing, rearrange the logic to not do useless work in locales
where money values are considered integral.
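For example (illustrative; assumes an lc_monetary with two fractional
digits), a 19-digit money value must keep its cents when converted:
    SELECT '92233720368547758.07'::money::numeric;  -- expect .07, not .00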
Per bug #15925 from Slawomir Chodnicki. Back-patch to all supported
branches.
Discussion: https://postgr.es/m/15925-da9953e2674bb5c8@postgresql.org
The values 'infinity' and '-infinity' are a part of the DATE type
itself, so a bound of the date 'infinity' is not the same as an
unbounded/infinite range. However, it is still wrong to try to
canonicalize such values, because adding or subtracting one has no
effect. Fix by treating 'infinity' and '-infinity' the same as
unbounded ranges for the purposes of canonicalization (but not other
purposes).
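For example (illustrative), canonicalizing an inclusive 'infinity'
bound must not attempt to add a day to it:
    SELECT daterange('2020-01-01', 'infinity', '[]');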
Backpatch to all versions because it is inconsistent with the
documented behavior. Note that this could be an incompatibility for
applications relying on the behavior contrary to the documentation.
Author: Laurenz Albe
Reviewed-by: Thomas Munro
Discussion: https://postgr.es/m/77f24ea19ab802bc9bc60ddbb8977ee2d646aec1.camel%40cybertec.at
Backpatch-through: 9.4
UBSan complains about this. Instead, cast to a suitable type requiring
only 4-byte alignment. DatumGetAnyArrayP() already assumes one can cast
between AnyArrayType and ArrayType, so this doesn't introduce a new
assumption. Back-patch to 9.5, where AnyArrayType was introduced.
Reviewed by Tom Lane.
Discussion: https://postgr.es/m/20190629210334.GA1244217@rfd.leadboat.com
Given a query in which multiple JOIN nodes used the same alias
(which'd necessarily be in different sub-SELECTs), ruleutils.c
would assign the JOIN nodes distinct aliases for clarity ...
but then it forgot to print the modified aliases when dumping
the JOIN nodes themselves. This results in a dump/reload hazard
for views, because the emitted query is flat-out incorrect:
Vars will be printed with table names that have no referent.
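A view of roughly this shape (names hypothetical) shows the hazard:
both sub-SELECTs alias their JOIN as j, so ruleutils.c renames one for
clarity, but formerly still printed the JOIN clause with the old alias:
    CREATE VIEW v AS
      SELECT s1.x, s2.y
      FROM (SELECT j.id AS x FROM (t1 JOIN t2 USING (id)) j) s1,
           (SELECT j.id AS y FROM (t3 JOIN t4 USING (id)) j) s2;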
This has been wrong for a long time, so back-patch to all supported
branches.
Philip Dubé
Discussion: https://postgr.es/m/CY4PR2101MB080246F2955FF58A6ED1FEAC98140@CY4PR2101MB0802.namprd21.prod.outlook.com
Vignesh found this bug in the check hook for
default_table_access_method, but that code was just copied from older
GUCs. Investigation by Michael and me then found the bug in further
places.
When not connected to a database (e.g. in a walsender connection), we
cannot perform (most) GUC checks that need database access. Even when
only shared tables are needed, they cannot be accessed without
pg_class etc. being present, unless they are nailed
(cf. RelationCacheInitializePhase2()).
Fix by extending the existing IsTransactionState() checks to also
check for MyDatabaseOid.
Reported-By: Vignesh C, Michael Paquier, Andres Freund
Author: Vignesh C, Andres Freund
Discussion: https://postgr.es/m/CALDaNm1KXK9gbZfY-p_peRFm_XrBh1OwQO1Kk6Gig0c0fVZ2uw%40mail.gmail.com
Backpatch: 9.4-
The function had been interpreting SQL_ASCII messages as UTF8, throwing
an error when they were invalid UTF8. The new behavior is consistent
with pg_do_encoding_conversion(). This affects LOG_DESTINATION_STDERR
and LOG_DESTINATION_EVENTLOG, which will send untranslated bytes to
write() and ReportEventA(). On buildfarm member bowerbird, enabling
log_connections caused an error whenever the role name was not valid
UTF8. Back-patch to 9.4 (all supported versions).
Discussion: https://postgr.es/m/20190512015615.GD1124997@rfd.leadboat.com
In examine_variable() and examine_simple_variable(), when checking the
user's table and column privileges to determine whether to grant
access to the pg_statistic data, use checkAsUser for the privilege
checks, if it's set. This will be the case if we're accessing the
table via a view, to indicate that we should perform privilege checks
as the view owner rather than the current user.
This change makes this planner check consistent with the check in the
executor, so the planner will be able to make use of statistics if the
table is accessible via the view. This fixes a performance regression
introduced by commit e2d4ef8de8, which affects queries against
non-security barrier views in the case where the user doesn't have
privileges on the underlying table, but the view owner does.
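A sketch of the affected situation (names hypothetical):
    CREATE TABLE t (x int);           -- owned by the view owner
    CREATE VIEW v AS SELECT x FROM t;
    GRANT SELECT ON v TO app_user;    -- app_user has no rights on t
    -- app_user's queries on v can now use t's pg_statistic entries
    -- during planning, as the executor check already allowed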
Note that it continues to provide the same safeguards controlling
access to pg_statistic for direct table access (in which case
checkAsUser won't be set) and for security barrier views, because of
the nearby checks on rte->security_barrier and rte->securityQuals.
Back-patch to all supported branches because e2d4ef8de8 was.
Dean Rasheed, reviewed by Jonathan Katz and Stephen Frost.
In commit e2d4ef8de8, security checks were added to prevent
user-supplied operators from running over data from pg_statistic
unless the user has table or column privileges on the table, or the
operator is leakproof. For a table with RLS, however, checking for
table or column privileges is insufficient, since that does not
guarantee that the user has permission to view all of the column's
data.
Fix this by also checking for securityQuals on the RTE, and insisting
that the operator be leakproof if there are any. Thus the
leakproofness check will only be skipped if there are no securityQuals
and the user has table or column privileges on the table -- i.e., only
if we know that the user has access to all the data in the column.
Back-patch to 9.5 where RLS was added.
Dean Rasheed, reviewed by Jonathan Katz and Stephen Frost.
Security: CVE-2019-10130
Commits 3dbb317d3 et al failed under CLOBBER_CACHE_ALWAYS testing.
Investigation showed that to reindex pg_class_oid_index, we must
suppress accesses to the index (via SetReindexProcessing) before we call
RelationSetNewRelfilenode, or at least before we do CommandCounterIncrement
therein; otherwise, relcache reloads happening within the CCI may try to
fetch pg_class rows using the index's new relfilenode value, which is as
yet an empty file.
Of course, the point of 3dbb317d3 was that that ordering didn't work
either, because then RelationSetNewRelfilenode's own update of the index's
pg_class row cannot access the index, should it need to.
There are various ways we might have got around that, but Andres Freund
came up with a brilliant solution: for a mapped index, we can really just
skip the pg_class update altogether. The only fields it was actually
changing were relpages etc, but it was just setting them to zeroes which
is useless make-work. (Correct new values will be installed at the end
of index build.) All pg_class indexes are mapped and probably always will
be, so this eliminates the problem by removing work rather than adding it,
always a pleasant outcome. Having taught RelationSetNewRelfilenode to do
it that way, we can revert the code reordering in reindex_index. (But
I left the moved setup code where it was; there seems no reason why it
has to run without use of the old index. If you're trying to fix a
busted pg_class index, you'll have had to disable system index use
altogether to get this far.)
Moreover, this means we don't need RelationSetIndexList at all, because
reindex_relation's hacking to make "REINDEX TABLE pg_class" work is
likewise now unnecessary. We'll leave that code in place in the back
branches, but a follow-on patch will remove it in HEAD.
In passing, do some minor cleanup for commit 5c1560606 (in HEAD only),
notably removing a duplicate newrnode assignment.
Patch by me, using a core idea due to Andres Freund. Back-patch to all
supported branches, as 3dbb317d3 was.
Discussion: https://postgr.es/m/28926.1556664156@sss.pgh.pa.us
cache_locale_time (extraction of LC_TIME-related info) had never been
taught the lessons we previously learned about extraction of info related
to LC_MONETARY and LC_NUMERIC. Specifically, commit 95a777c61 taught
PGLC_localeconv() that data coming out of localeconv() was in an encoding
determined by the relevant locale, but we didn't realize that there's a
similar issue with strftime(). And commit a4930e7ca hardened
PGLC_localeconv() against errors occurring partway through, but failed
to do likewise for cache_locale_time(). So, rearrange the latter
function to perform encoding conversion and not risk failure while
it's got the locales set to temporary values.
This time around I also changed PGLC_localeconv() to treat it as FATAL
if it can't restore the previous settings of the locale values. There
is no reason (except possibly OOM) for that to fail, and proceeding with
the wrong locale values seems like a seriously bad idea --- especially
on Windows where we have to also temporarily change LC_CTYPE. Also,
protect against the possibility that we can't identify the codeset
reported for LC_MONETARY or LC_NUMERIC; rather than just failing,
try to validate the data without conversion.
The user-visible symptom this fixes is that if LC_TIME is set to a locale
name that implies an encoding different from the database encoding,
non-ASCII localized day and month names would be retrieved in the wrong
encoding, leading to either unexpected encoding-conversion error reports
or wrong output from to_char(). The other possible failure modes are
unlikely enough that we've not seen reports of them, AFAIK.
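A sketch of the user-visible case, in a UTF8 database (locale name
illustrative and platform-dependent):
    SET lc_time = 'es_ES.ISO8859-1';
    SELECT to_char(now(), 'TMDay');  -- localized name now converted
                                     -- to the database encoding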
The encoding conversion problems do not manifest on Windows, since
we'd already created special-case code to handle that issue there.
Per report from Juan José Santamaría Flecha. Back-patch to all
supported versions.
Juan José Santamaría Flecha and Tom Lane
Discussion: https://postgr.es/m/CAC+AXB22So5aZm2vZe+MChYXec7gWfr-n-SK-iO091R0P_1Tew@mail.gmail.com
Previously, the include actions include_dir, include_if_exists, and
include listed commented-out values that were not the defaults, which
was inconsistent with other entries. Replace them with '', which is
the actual default value.
Reported-by: Emanuel Araújo
Discussion: https://postgr.es/m/CAMuTAkYMx6Q27wpELDR3_v9aG443y7ZjeXu15_+1nGUjhMWOJA@mail.gmail.com
Backpatch-through: 9.4
postmaster startup scrutinizes any shared memory segment recorded in
postmaster.pid, exiting if that segment matches the current data
directory and has an attached process. When the postmaster.pid file was
missing, a starting postmaster used weaker checks. Change to use the
same checks in both scenarios. This increases the chance of a startup
failure, in lieu of data corruption, if the DBA does "kill -9 `head -n1
postmaster.pid` && rm postmaster.pid && pg_ctl -w start". A postmaster
will no longer stop if shmat() of an old segment fails with EACCES. A
postmaster will no longer recycle segments pertaining to other data
directories. That's good for production, but it's bad for integration
tests that crash a postmaster and immediately delete its data directory.
Such a test now leaks a segment indefinitely. No "make check-world"
test does that. win32_shmem.c already avoided all these problems. In
9.6 and later, enhance PostgresNode to facilitate testing. Back-patch
to 9.4 (all supported versions).
Reviewed (in earlier versions) by Daniel Gustafsson and Kyotaro HORIGUCHI.
Discussion: https://postgr.es/m/20190408064141.GA2016666@rfd.leadboat.com
postmaster startup scrutinizes any shared memory segment recorded in
postmaster.pid, exiting if that segment matches the current data
directory and has an attached process. When the postmaster.pid file was
missing, a starting postmaster used weaker checks. Change to use the
same checks in both scenarios. This increases the chance of a startup
failure, in lieu of data corruption, if the DBA does "kill -9 `head -n1
postmaster.pid` && rm postmaster.pid && pg_ctl -w start". A postmaster
will no longer recycle segments pertaining to other data directories.
That's good for production, but it's bad for integration tests that
crash a postmaster and immediately delete its data directory. Such a
test now leaks a segment indefinitely. No "make check-world" test does
that. win32_shmem.c already avoided all these problems. In 9.6 and
later, enhance PostgresNode to facilitate testing. Back-patch to 9.4
(all supported versions).
Reviewed by Daniel Gustafsson and Kyotaro HORIGUCHI.
Discussion: https://postgr.es/m/20130911033341.GD225735@tornado.leadboat.com
One should almost always terminate an old process, not use a manual
removal tool like ipcrm. Removal of the ipcclean script eleven years
ago (39627b1ae6) and its non-replacement
corroborate that manual shm removal is now a niche goal. Back-patch to
9.4 (all supported versions).
Reviewed by Daniel Gustafsson and Kyotaro HORIGUCHI.
Discussion: https://postgr.es/m/20180812064815.GB2301738@rfd.leadboat.com
Previously we were using the SQL:2003 definition, which doesn't allow
this, but that creates a serious dump/restore gotcha: there is no
setting of xmloption that will allow all valid XML data. Hence,
switch to the 2006 definition.
Since libxml doesn't accept <!DOCTYPE> directives in the mode we
use for CONTENT parsing, the implementation is to detect <!DOCTYPE>
in the input and switch to DOCUMENT parsing mode. This should not
cost much, because <!DOCTYPE> should be close to the front of the
input if it's there at all. It's possible that this causes the
error messages for malformed input to be slightly different than
they were before, if said input includes <!DOCTYPE>; but that does
not seem like a big problem.
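For example (illustrative), this is now accepted in CONTENT mode:
    SET XML OPTION CONTENT;
    SELECT '<!DOCTYPE doc><doc/>'::xml;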
In passing, buy back a few cycles in parsing of large XML documents
by not doing strlen() of the whole input in parse_xml_decl().
Back-patch because dump/restore failures are not nice. This change
shouldn't break any cases that worked before, so it seems safe to
back-patch.
Chapman Flack (revised a bit by me)
Discussion: https://postgr.es/m/CAN-V+g-6JqUQEQZ55Q3toXEN6d5Ez5uvzL4VR+8KtvJKj31taw@mail.gmail.com
None of the code that uses GUC values is really prepared for them to
hold NaN, but parse_real() didn't have any defense against accepting
such a value. Treat it the same as a syntax error.
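For example (illustrative), a float GUC now rejects NaN cleanly:
    SET seq_page_cost = 'NaN';  -- now fails with an invalid-value error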
I haven't attempted to analyze the exact consequences of setting any
of the float GUCs to NaN, but since they're quite unlikely to be good,
this seems like a back-patchable bug fix.
Note: we don't need an explicit test for +-Infinity because those will
be rejected by existing range checks. I added a regression test for
that in HEAD, but not older branches because the spelling of the value
in the error message will be platform-dependent in branches where we
don't always use port/snprintf.c.
Discussion: https://postgr.es/m/1798.1552165479@sss.pgh.pa.us
Changing this parameter requires a server restart for the new value
to take effect, which was not mentioned in the documentation or in
postgresql.conf.sample.
Reported-by: Thomas Poty
Discussion: https://postgr.es/m/15659-0cd812f13027a2d8@postgresql.org
While this isn't really supposed to happen, it can occur in OOM
situations and perhaps others. Instead of crashing, substitute
"(no message provided)".
I didn't worry about localizing this text, since we aren't
localizing anything else here; besides, if we're on the edge of
OOM, it's unlikely gettext() would work.
Report and fix by Sergio Conde Gómez in bug #15624.
Discussion: https://postgr.es/m/15624-4dea54091a2864e6@postgresql.org
The bug was that determining which columns are part of the replica
identity index using RelationGetIndexAttrBitmap() would run
eval_const_expressions() on index expressions and predicates across
all indexes of the table, which in turn might require a snapshot, but
there wasn't one set, so it crashes. There were actually two separate
bugs, one on the publisher and one on the subscriber.
To trigger the bug, a table that is part of a publication or
subscription needs to have an index with a predicate or expression
that lends itself to constant expressions simplification.
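A sketch of a trigger case (names hypothetical):
    CREATE TABLE t (a int PRIMARY KEY, b int);
    CREATE INDEX t_idx ON t (b) WHERE b > length('');  -- simplifies to b > 0
    CREATE PUBLICATION pub FOR TABLE t;
    -- subsequent DML on t needed the replica identity columns, formerly
    -- running eval_const_expressions() with no snapshot set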
The fix is to avoid the constant expressions simplification in
RelationGetIndexAttrBitmap(), so that it becomes safe to call in these
contexts. The constant expressions simplification comes from the
calls to RelationGetIndexExpressions()/RelationGetIndexPredicate() via
BuildIndexInfo(). But RelationGetIndexAttrBitmap() calling
BuildIndexInfo() is overkill. The latter just takes pg_index catalog
information, packs it into the IndexInfo structure, which the former
then just unpacks again and throws away. We can do this directly with
less overhead and skip the troublesome calls to
eval_const_expressions(). This also removes the awkward
cross-dependency between relcache.c and index.c.
Bug: #15114
Reported-by: Петър Славов <pet.slavov@gmail.com>
Reviewed-by: Noah Misch <noah@leadboat.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://www.postgresql.org/message-id/flat/152110589574.1223.17983600132321618383@wrigleys.postgresql.org/
ftoi4 and its sibling coercion functions did their overflow checks in
a way that looked superficially plausible, but actually depended on an
assumption that the MIN and MAX comparison constants can be represented
exactly in the float4 or float8 domain. That fails in ftoi4, ftoi8,
and dtoi8, resulting in a possibility that values near the MAX limit will
be wrongly converted (to negative values) when they need to be rejected.
Also, because we compared before rounding off the fractional part,
the other three functions threw errors for values that really ought
to get rounded to the min or max integer value.
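For instance (illustrative), float4 cannot represent INT32_MAX
exactly; the nearest value is 2^31, which is out of int4 range and
must be rejected rather than wrapped to a negative number:
    SELECT (2147483647::float4)::int4;  -- ERROR: integer out of range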
Fix by doing rint() first (requiring an assumption that it handles
NaN and Inf correctly; but dtoi8 and ftoi8 were assuming that already),
and by comparing to values that should coerce to float exactly, namely
INTxx_MIN and -INTxx_MIN. Also remove some random cosmetic discrepancies
between these six functions.
This back-patches commits cbdb8b4c0 and 452b637d4. In the 9.4 branch,
also back-patch the portion of 62e2a8dc2 that added PG_INTnn_MIN and
related constants to c.h, so that these functions can rely on them.
Per bug #15519 from Victor Petrovykh.
Patch by me; thanks to Andrew Gierth for analysis and discussion.
Discussion: https://postgr.es/m/15519-4fc785b483201ff1@postgresql.org
On some operating systems, it doesn't make sense to retry fsync(),
because dirty data cached by the kernel may have been dropped on
write-back failure. In that case the only remaining copy of the
data is in the WAL. A subsequent fsync() could appear to succeed,
but not have flushed the data. That means that a future checkpoint
could apparently complete successfully but have lost data.
Therefore, violently prevent any future checkpoint attempts by
panicking on the first fsync() failure. Note that we already
did the same for WAL data; this change extends that behavior to
non-temporary data files.
Provide a GUC data_sync_retry to control this new behavior, for
users of operating systems that don't eject dirty data, and possibly
forensic/testing uses. If it is set to on and the write-back error
was transient, a later checkpoint might genuinely succeed (on a
system that does not throw away buffers on failure); if the error is
permanent, later checkpoints will continue to fail. The GUC defaults
to off, meaning that we panic.
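For example, in postgresql.conf (illustrative):
    data_sync_retry = on   # keep retrying fsync() instead of panicking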
Back-patch to all supported releases.
There is still a narrow window for error-loss on some operating
systems: if the file is closed and later reopened and a write-back
error occurs in the intervening time, but the inode has the bad
luck to be evicted due to memory pressure before we reopen, we could
miss the error. A later patch will address that with a scheme
for keeping files with dirty data open at all times, but we judge
that to be too complicated to back-patch.
Author: Craig Ringer, with some adjustments by Thomas Munro
Reported-by: Craig Ringer
Reviewed-by: Robert Haas, Thomas Munro, Andres Freund
Discussion: https://postgr.es/m/20180427222842.in2e4mibx45zdth5%40alap3.anarazel.de
Previously it was possible to set client_min_messages to FATAL or PANIC,
which had the effect of suppressing transmission of regular ERROR messages
to the client. Perhaps that seemed like a useful option in the past, but
the trouble with it is that it breaks guarantees that are explicitly made
in our FE/BE protocol spec about how a query cycle can end. While libpq
and psql manage to cope with the omission, that's mostly because they
are not very bright; client libraries that have more semantic knowledge
are likely to get confused. Notably, pgODBC doesn't behave very sanely.
Let's fix this by getting rid of the ability to set client_min_messages
above ERROR.
In HEAD, just remove the FATAL and PANIC options from the set of allowed
enum values for client_min_messages. (This change also affects
trace_recovery_messages, but that's OK since these aren't useful values
for that variable either.)
In the back branches, there was concern that rejecting these values might
break applications that are explicitly setting things that way. I'm
pretty skeptical of that argument, but accommodate it by accepting these
values and then internally setting the variable to ERROR anyway.
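For example (illustrative), in the back branches:
    SET client_min_messages = FATAL;  -- accepted, but now acts as ERROR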
In all branches, this allows a couple of tiny simplifications in the
logic in elog.c, so do that.
Also respond to the point that was made that client_min_messages has
exactly nothing to do with the server's logging behavior, and therefore
does not belong in the "When To Log" subsection of the documentation.
The "Statement Behavior" subsection is a better match, so move it there.
Jonah Harris and Tom Lane
Discussion: https://postgr.es/m/7809.1541521180@sss.pgh.pa.us
Discussion: https://postgr.es/m/15479-ef0f4cc2fd995ca2@postgresql.org