The tar output module did some very ugly and ultimately incorrect hacking
on COPY commands to try to get them to work in the context of restoring a
deconstructed tar archive. In particular, it would fail altogether for
table names containing any upper-case characters, since it smashed the
command string to lower-case before modifying it (and, just to add insult
to injury, did that in a way that would fail in multibyte encodings).
I don't see any particular value in being flexible about the case of the
command keywords, since the string will just have been created by
dumpTableData, so let's get rid of the whole case-folding thing.
Also, it doesn't seem to meet the POLA for the script to restore data only
in COPY mode, so add \i commands to make it have comparable behavior in
--inserts mode.
Noted while looking at the tar-output code in connection with Brian
Weaver's patch.
Back-patch portions of commit 05b555d12b.
There doesn't seem to be any reason not to fix pg_basebackup fully, but
we can't change pg_dump's "magic" string without breaking older versions
of pg_restore. Instead, just patch pg_restore to accept either version
of the magic string, in hopes of avoiding compatibility problems when
9.3 comes out. I also fixed pg_dump to write the correct 2-block EOF
marker, since that won't create a compatibility problem with pg_restore
and it could help with some versions of tar.
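For illustration, accepting either magic string could look roughly like
this (a sketch only, with the magic field at byte offset 257 per the tar
standard; not the exact pg_restore code):

    #include <stdbool.h>
    #include <string.h>

    /* Sketch: accept either tar magic when validating a header block. */
    static bool
    is_valid_tar_header(const char *h)
    {
        if (memcmp(&h[257], "ustar", 6) == 0 &&     /* POSIX "ustar\0" */
            memcmp(&h[263], "00", 2) == 0)          /* POSIX version "00" */
            return true;
        if (memcmp(&h[257], "ustar  ", 8) == 0)     /* pre-POSIX GNU-style */
            return true;
        return false;
    }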
Brian Weaver and Tom Lane
The syntax "su -c 'command' username" is not accepted by all versions of
su, for example not OpenBSD's. More portable is "su username -c
'command'". So change runtime.sgml to recommend that syntax. Also,
add a -D switch to the OpenBSD example script, for consistency with other
examples. Per Denis Lapshin and Gábor Hidvégi.
Somewhere along the line, somebody decided to remove all trace of this
notation from the documentation text. It was still in the command syntax
synopses, or at least some of them, but with no indication what it meant.
This will not do, as evidenced by the confusion apparent in bug #7543;
even if the notation is now unnecessary, people will find it in legacy
SQL code and need to know what it does.
The recovery code documents clearly that a shutdown checkpoint is executed
at end of recovery, and a shutdown-checkpoint WAL record is indeed written;
but the buffer manager had been altered to treat end of recovery as a
normal checkpoint.
This bug exacerbates the bufmgr relpersistence bug.
Bug spotted by Andres Freund, patch by me.
The documentation mentioned setting autovacuum_freeze_max_age to
"its maximum allowed value of a little less than two billion".
This led to a post asking about the exact maximum allowed value,
which is precisely two billion, not "a little less".
Based on question by Radovan Jablonovsky. Backpatch to 8.3.
Back-patch commits 9afc648111 and
b8fbbcf37f. The first of these is really
a minor code cleanup to save a few cycles, but it turns out to provide
a workaround for the misoptimization problem described in bug #7516.
The second commit adds a regression test case.
Back-patch the fix to all active branches. The test case only works
as far back as 9.0, because it relies on plpgsql which isn't installed
by default before that. (I didn't have success modifying it into an
all-plperl form that still provoked a crash, though this may just reflect
my lack of Perl-fu.)
I made multiple errors in commit 97532f7c29,
stemming mostly from failure to think about the available frequency data
as being element frequencies not value frequencies (so that occurrences of
different elements are not mutually exclusive). This led to sillinesses
such as estimating that "word" would match more rows than "word:*".
The choice to clamp to a minimum estimate of DEFAULT_TS_MATCH_SEL also
seems pretty ill-considered in hindsight, as it would frequently result in
an estimate much larger than the available data suggests. We do need some
sort of clamp, since a pattern not matching any of the MCELEMs probably
still needs a selectivity estimate of more than zero. I chose instead to
clamp to at least what a non-MCELEM word would be estimated as, preserving
the property that "word:*" doesn't get an estimate less than plain "word",
whether or not the word appears in MCELEM.
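In sketch form, the new clamp rule is just this (hypothetical names, not
the committed code):

    /* Sketch, hypothetical names: clamp a prefix-match estimate to at
     * least the estimate a plain non-MCELEM word would get, rather than
     * to DEFAULT_TS_MATCH_SEL. */
    static double
    clamp_prefix_match_selec(double selec, double non_mcelem_word_selec)
    {
        return (selec < non_mcelem_word_selec) ? non_mcelem_word_selec : selec;
    }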
Per investigation of a gripe from Bill Martin, though I suspect that his
example case actually isn't even reaching the erroneous code.
Back-patch to 9.1 where this code was introduced.
validate_plperl_function() supposed that it could free an old
plperl_proc_desc struct immediately upon detecting that it was stale.
However, if a plperl function is called recursively, this could result
in deleting the struct out from under an outer invocation, leading to
misbehavior or crashes. Add a simple reference-count mechanism to
ensure that such structs are freed only when the last reference goes
away.
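A simplified sketch of the scheme (the real struct in plperl.c has many
more fields and frees into PostgreSQL memory contexts):

    #include <stdlib.h>

    /* Simplified sketch; the real struct has many more fields. */
    typedef struct plperl_proc_desc
    {
        char   *proname;
        int     fn_refcount;        /* active uses of this struct */
    } plperl_proc_desc;

    static void
    free_plperl_function(plperl_proc_desc *prodesc)
    {
        if (--prodesc->fn_refcount > 0)
            return;                 /* an outer invocation still needs it */
        free(prodesc->proname);
        free(prodesc);
    }

Each call site increments fn_refcount before use and calls
free_plperl_function() afterward, so a stale struct is released only when
the outermost active invocation lets go of it.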
Per investigation of bug #7516 from Marko Tiikkaja. I am not certain
that this error explains his report, because he says he didn't have
any recursive calls --- but it's hard to see how else it could have
crashed right there. In any case, this definitely fixes some problems
in the area.
Back-patch to all active branches.
Investigation shows that some intermittent build failures in ecpg are the
result of a gmake bug that was reported quite some time ago:
http://savannah.gnu.org/bugs/?30653
Preventing parallel builds of the ecpg subdirectories seems to dodge the
bug. Per yesterday's pgsql-hackers discussion, there are some other things
in the subdirectory makefiles that seem rather unsafe for parallel builds
too, but there's little point in fixing them as long as we have to work
around a make bug.
Back-patch to 9.1; parallel builds weren't very well supported before
that anyway.
The planner previously assumed that parameter Vars having the same absolute
query level, varno, and varattno could safely be assigned the same runtime
PARAM_EXEC slot, even though they might be different Vars appearing in
different subqueries. This was (probably) safe before the introduction of
CTEs, but the lazy-evaluation mechanism used for CTEs means that a CTE can
be executed during execution of some other subquery, causing the lifespan
of Params at the same syntactic nesting level as the CTE to overlap with
use of the same slots inside the CTE. In 9.1 we created additional hazards
by using the same parameter-assignment technology for nestloop inner scan
parameters, but it was broken before that, as illustrated by the added
regression test.
To fix, restructure the planner's management of PlannerParamItems so that
items having different semantic lifespans are kept rigorously separated.
This will probably result in complex queries using more runtime PARAM_EXEC
slots than before, but the slots are cheap enough that this hardly matters.
Also, stop generating PlannerParamItems containing Params for subquery
outputs: all we really need to do is reserve the PARAM_EXEC slot number,
and that now only takes incrementing a counter. The planning code is
simpler and probably faster than before, as well as being more correct.
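Reserving such a slot now amounts to a counter bump, roughly (simplified
sketch; assumes the planner's PlannerInfo/PlannerGlobal types):

    /* Simplified sketch (planner types assumed): reserving a PARAM_EXEC
     * slot for a subquery output is now just a counter bump on
     * PlannerGlobal; no PlannerParamItem is built for it. */
    static int
    assign_special_exec_param(PlannerInfo *root)
    {
        return root->glob->nParamExec++;
    }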
Per report from Vik Reykja.
Back-patch of commit 46c508fbcf into all
branches that support WITH.
Give the correct name of the GUC parameter being complained of.
Also, emit a more suitable SQLSTATE (INVALID_PARAMETER_VALUE,
not the default INTERNAL_ERROR).
Gurjeet Singh, errcode adjustment by me
pg_upgrade opened the output from pg_dumpall in text mode and
wrote the split files in text mode. This caused unwanted eating
of intended carriage returns on input and production of spurious
carriage returns on output. To avoid this, open all these files
in binary mode. On non-Windows platforms, this change has no
effect.
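The change is essentially of this shape (illustrative sketch with
hypothetical file names):

    #include <stdio.h>

    /* Sketch: open the dump and split files in binary mode so Windows
     * text-mode I/O cannot eat or insert carriage returns.  On other
     * platforms "rb"/"wb" behave identically to "r"/"w". */
    static void
    open_dump_files(const char *all_dump, const char *split_file,
                    FILE **in, FILE **out)
    {
        *in = fopen(all_dump, "rb");        /* was "r" */
        *out = fopen(split_file, "wb");     /* was "w" */
    }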
Backpatch to 9.0. On 9.0 and 9.1, we also switch from redirecting
pg_dumpall's output to using pg_dumpall's -f switch, for the same
reason.
Perl, for some unaccountable reason, believes it's a good idea to reset
SIGFPE handling to SIG_IGN. Which wouldn't be a good idea even if it
worked; but on some platforms (Linux at least) it doesn't work at all,
instead resulting in forced process termination if the signal occurs.
Given the lack of other complaints, it seems safe to assume that Perl
never actually provokes SIGFPE and so there is no value in the setting
anyway. Hence, reset it to our normal handler after initializing Perl.
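In sketch form (assuming PostgreSQL's pqsignal wrapper and its standard
FloatExceptionHandler, called right after Perl initialization):

    #include <signal.h>

    /* Sketch, PostgreSQL names assumed: called right after Perl has been
     * initialized, to undo Perl's SIG_IGN setting for SIGFPE. */
    static void
    restore_sigfpe_handler(void)
    {
        pqsignal(SIGFPE, FloatExceptionHandler);
    }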
Report, analysis and patch by Andres Freund.
pg_upgrade produces a platform-specific script to remove the old
directory, but on Windows it has not been making sure that the
paths it writes as arguments for rmdir and del use the backslash
path separator, which will cause these scripts to fail.
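The fix boils down to a helper along these lines (a sketch; the real
helper's name may differ):

    /* Sketch: on Windows, flip forward slashes to backslashes before
     * writing a path as an argument to rmdir or del. */
    static void
    make_native_path(char *path)
    {
    #ifdef WIN32
        char   *p;

        for (p = path; *p; p++)
            if (*p == '/')
                *p = '\\';
    #endif
    }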
The fix is backpatched to Release 9.0.
We previously supposed that any given platform would supply both or neither
of these functions, so that one configure test would be sufficient. It now
appears that at least on AIX this is not the case ... which is likely an
AIX bug, but nonetheless we need to cope with it. So use separate tests.
Per bug #6758; thanks to Andrew Hastie for doing the followup testing
needed to confirm what was happening.
Backpatch to 9.1, where we began using these functions.
This back-ports commits c8ba697a4b and
e5db11c558, which fix one definite and one
speculative bug in gistchoose, and make the code a lot more intelligible as
well. In 9.2 only, this also affects the largely-copied-and-pasted logic
in gistRelocateBuildBuffersOnSplit.
The impact of the bugs was that the functions might make poor decisions
as to which index tree branch to push a new entry down into, resulting in
GiST index bloat and poor performance. The fixes rectify these decisions
for future insertions, but a REINDEX would be needed to clean up any
existing index bloat.
Alexander Korotkov, Robert Haas, Tom Lane
The existing documentation in the "Linux Memory Overcommit" section
seemed to assume that PostgreSQL itself could never be the problem, or
at least it didn't tell you what to do about it.
Per discussion with Craig Ringer and Kevin Grittner.
This is a backport of commit b85427f227.
Per discussion of bug #4958. Some of these fixes probably need to be
back-patched further, but I'm just doing this much for now.
The GUC check hooks for transaction_read_only and transaction_isolation
tried to check RecoveryInProgress(), so as to disallow setting read/write
mode or serializable isolation level (respectively) in hot standby
sessions. However, GUC check hooks can be called in many situations where
we're not connected to shared memory at all, resulting in a crash in
RecoveryInProgress(). Among other cases, this results in EXEC_BACKEND
builds crashing during child process start if default_transaction_isolation
is serializable, as reported by Heikki Linnakangas. Protect those calls
by silently allowing any setting when not inside a transaction, which is
okay anyway since these GUCs are always reset at start of transaction.
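The guarded hook looks roughly like this (simplified sketch; error text
abbreviated, GUC hook types per guc.h):

    /* Simplified sketch: consult RecoveryInProgress() only when actually
     * inside a transaction, i.e. attached to shared memory. */
    bool
    check_transaction_read_only(bool *newval, void **extra, GucSource source)
    {
        if (*newval == false && IsTransactionState())
        {
            if (RecoveryInProgress())
            {
                GUC_check_errcode(ERRCODE_FEATURE_NOT_SUPPORTED);
                GUC_check_errmsg("cannot set transaction read-write mode during recovery");
                return false;
            }
        }
        return true;    /* silently accept when not in a transaction */
    }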
Also, add a check to GetSerializableTransactionSnapshot() to complain
if we are in hot standby. We need that check despite the one in
check_XactIsoLevel() because default_transaction_isolation could be
serializable. We don't want to complain any sooner than this in such
cases, since that would prevent running transactions at all in such a
state; but a transaction can be run, if SET TRANSACTION ISOLATION is done
before setting a snapshot. Per report some months ago from Robert Haas.
Back-patch to 9.1, since these problems were introduced by the SSI patch.
Kevin Grittner and Tom Lane, with ideas from Heikki Linnakangas
If we revoke a grant option from some role X, but X still holds the option
via another grant, we should not recursively revoke the privilege from
role(s) Y that X had granted it to. This was supposedly fixed as one
aspect of commit 4b2dafcc0b, but I must not
have tested it, because in fact that code never worked: it forgot to shift
the grant-option bits back over when masking the bits being revoked.
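The missing step was of this shape (a sketch; the AclMode stand-in and
variable names are hypothetical, with the shift matching acl.h's layout):

    #include <stdint.h>

    typedef uint32_t AclMode;       /* stand-in for the PostgreSQL typedef */

    /* Grant-option bits live 16 bits above the privilege bits (acl.h). */
    #define ACL_OPTION_TO_PRIVS(privs)  ((((AclMode) (privs)) >> 16) & 0xFFFF)

    /* Sketch: the privileges being revoked for which X still holds the
     * option via other grants; only the remainder should be revoked
     * recursively from X's grantees. */
    static AclMode
    still_held_options(AclMode remaining_goptions, AclMode revoke_privs)
    {
        return revoke_privs & ACL_OPTION_TO_PRIVS(remaining_goptions);
    }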
Per bug #6728 from Daniel German. Back-patch to all active branches,
since this has been wrong since 8.0.
Extraction of trigrams did not process LIKE escape sequences properly,
leading to possible misidentification of trigrams near escapes, resulting
in incorrect index search results.
Fujii Masao
The previous coding essentially assumed that nodes would be rescanned in
the same order they were initialized in; or at least that the "leader" of
a group of CTEscans would be rescanned before any others were required to
execute. Unfortunately, that isn't even a little bit true. It's possible
to devise queries in which the leader isn't rescanned until other CTEscans
on the same CTE have run to completion, or even in which the leader never
gets a rescan call at all.
The fix makes the leader specially responsible only for initial creation
and final destruction of the tuplestore; rescan resets are now a
symmetrically shared responsibility. This means that we might reset the
tuplestore multiple times when restarting a plan subtree containing
multiple CTEscans; but resetting an already-empty tuplestore is cheap
enough that that doesn't seem like a problem.
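In heavily simplified sketch form (cte_params_changed() is a hypothetical
helper; this is not the committed code):

    /* Sketch: whichever CTE scan is rescanned with changed CTE parameters
     * empties the shared tuplestore; the leader matters only for creating
     * and destroying it. */
    static void
    cte_scan_rescan(CteScanState *node)
    {
        Tuplestorestate *ts = node->leader->cte_table;

        if (cte_params_changed(node))
            tuplestore_clear(ts);       /* cheap if already empty */
        else
            tuplestore_rescan(ts);      /* just reposition the read pointer */
    }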
Per report from Adam Mackler; the new regression test cases are based on
his example query.
Back-patch to 8.4 where CTE scans were introduced.
This situation creates a dependency loop that confuses pg_dump and probably
other things. Moreover, since the mental model is that the extension
"contains" schemas it owns, but "is contained in" its extschema (even
though neither is strictly true), having both true at once is confusing for
people too. So prevent the situation from being set up.
Reported and patched by Thom Brown. Back-patch to 9.1 where extensions
were added.
libxslt offers the ability to read and write both files and URLs through
stylesheet commands, thus allowing unprivileged database users to both read
and write data with the privileges of the database server. Disable that
through proper use of libxslt's security options.
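Concretely, the lockdown is along these lines (a sketch using libxslt's
security-preferences API; the caller creates sec with
xsltNewSecurityPrefs()):

    #include <libxslt/security.h>
    #include <libxslt/transform.h>

    /* Sketch: forbid all file and URL access during stylesheet execution. */
    static void
    forbid_external_access(xsltTransformContextPtr ctxt,
                           xsltSecurityPrefsPtr sec)
    {
        xsltSetSecurityPrefs(sec, XSLT_SECPREF_READ_FILE, xsltSecurityForbid);
        xsltSetSecurityPrefs(sec, XSLT_SECPREF_WRITE_FILE, xsltSecurityForbid);
        xsltSetSecurityPrefs(sec, XSLT_SECPREF_CREATE_DIRECTORY, xsltSecurityForbid);
        xsltSetSecurityPrefs(sec, XSLT_SECPREF_READ_NETWORK, xsltSecurityForbid);
        xsltSetSecurityPrefs(sec, XSLT_SECPREF_WRITE_NETWORK, xsltSecurityForbid);
        xsltSetCtxtSecurityPrefs(sec, ctxt);
    }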
Also, remove xslt_process()'s ability to fetch documents and stylesheets
from external files/URLs. While this was a documented "feature", it was
long regarded as a terrible idea. The fix for CVE-2012-3489 broke that
capability, and rather than expend effort on trying to fix it, we're just
going to summarily remove it.
While the ability to write as well as read makes this security hole
considerably worse than CVE-2012-3489, the problem is mitigated by the fact
that xslt_process() is not available unless contrib/xml2 is installed,
and the longstanding warnings about security risks from that should have
discouraged prudent DBAs from installing it in security-exposed databases.
Reported and fixed by Peter Eisentraut.
Security: CVE-2012-3488
xml_parse() would attempt to fetch external files or URLs as needed to
resolve DTD and entity references in an XML value, thus allowing
unprivileged database users to attempt to fetch data with the privileges
of the database server. While the external data wouldn't get returned
directly to the user, portions of it could be exposed in error messages
if the data didn't parse as valid XML; and in any case the mere ability
to check existence of a file might be useful to an attacker.
The ideal solution to this would still allow fetching of references that
are listed in the host system's XML catalogs, so that documents can be
validated according to installed DTDs. However, doing that with the
available libxml2 APIs appears complex and error-prone, so we're not going
to risk it in a security patch that necessarily hasn't gotten wide review.
So this patch merely shuts off all access, causing any external fetch to
silently expand to an empty string. A future patch may improve this.
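The shutoff can be implemented with a custom libxml2 entity loader,
roughly like this (a sketch; the committed code differs in detail):

    #include <libxml/parser.h>
    #include <libxml/parserInternals.h>

    /* Sketch: an entity loader that never touches files or URLs; every
     * external reference expands to an empty string instead. */
    static xmlParserInputPtr
    deny_entity_loader(const char *URL, const char *ID,
                       xmlParserCtxtPtr ctxt)
    {
        return xmlNewStringInputStream(ctxt, (const xmlChar *) "");
    }

    /* installed once, before any user-supplied XML is parsed */
    static void
    install_deny_loader(void)
    {
        xmlSetExternalEntityLoader(deny_entity_loader);
    }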
In HEAD and 9.2, also suppress warnings about undefined entities, which
would otherwise occur as a result of not loading referenced DTDs. Previous
branches don't show such warnings anyway, due to different error handling
arrangements.
Credit to Noah Misch for first reporting the problem, and for much work
towards a solution, though this simplistic approach was not his preference.
Also thanks to Daniel Veillard for consultation.
Security: CVE-2012-3489
DST law changes in Morocco; Tokelau has relocated to the other side of
the International Date Line; and apparently Olson had Tokelau's GMT
offset wrong by an hour even before that.
There are also a large number of non-significant changes in this update.
Upstream took the opportunity to remove trailing whitespace, and the
SCCS-style version numbers on the individual files are gone too.
This command generated new pg_depend entries linking the index to the
constraint and the constraint to the table, which match the entries made
when a unique or primary key constraint is built de novo. However, it did
not bother to get rid of the entries linking the index directly to the
table. We had considered the issue when the ADD CONSTRAINT USING INDEX
patch was written, and concluded that we didn't need to get rid of the
extra entries. But this is wrong: ALTER COLUMN TYPE wasn't expecting such
redundant dependencies to exist, as reported by Hubert Depesz Lubaczewski.
On reflection it seems rather likely to break other things as well, since
there are many bits of code that crawl pg_depend for one purpose or
another, and most of them are pretty naive about what relationships they're
expecting to find. Fortunately it's not that hard to get rid of the extra
dependency entries, so let's do that.
Back-patch to 9.1, where ALTER TABLE ADD CONSTRAINT USING INDEX was added.
Should be limited to the maximum number of connections excluding
autovacuum workers, not including them.
Add similar check for max_wal_senders, which should never be higher than
max_connections.
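In sketch form, the startup checks become (simplified; the real code's
messages and exit path differ):

    /* Simplified sketch of the startup sanity checks. */
    static void
    check_connection_limits(void)
    {
        if (ReservedBackends >= MaxConnections)
            ereport(FATAL,
                    (errmsg("superuser_reserved_connections must be less than max_connections")));
        if (max_wal_senders >= MaxConnections)
            ereport(FATAL,
                    (errmsg("max_wal_senders must be less than max_connections")));
    }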
We used to convert the unicode object directly to a string in the server
encoding by calling Python's PyUnicode_AsEncodedString function. In other
words, we used Python's routines to do the encoding. However, that has a
few problems. First of all, it required keeping a mapping table of Python
encoding names and PostgreSQL encodings. But the real killer was that Python
doesn't support EUC_TW and MULE_INTERNAL encodings at all.
Instead, convert the Python unicode object to UTF-8, and use PostgreSQL's
encoding conversion functions to convert from UTF-8 to server encoding. We
were already doing the same in the other direction in PLyUnicode_FromString,
so this is more consistent, too.
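The new conversion path is roughly as follows (a simplified sketch with
error handling omitted; PyBytes_AsString is the Python 3 spelling):

    #include <Python.h>
    #include <string.h>

    /* Simplified sketch: go through UTF-8 and let PostgreSQL's own
     * conversion routines reach the server encoding, instead of asking
     * Python for the server encoding directly. */
    static char *
    unicode_to_server(PyObject *unicode)
    {
        PyObject   *bytes = PyUnicode_AsUTF8String(unicode);
        char       *utf8 = PyBytes_AsString(bytes);
        char       *converted;

        converted = (char *) pg_do_encoding_conversion((unsigned char *) utf8,
                                                       strlen(utf8),
                                                       PG_UTF8,
                                                       GetDatabaseEncoding());
        if (converted == utf8)
            converted = pstrdup(converted);     /* copy before dropping bytes */
        Py_DECREF(bytes);
        return converted;
    }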
Note: This makes SQL_ASCII behave more leniently. We used to map
SQL_ASCII to Python's 'ascii' codec, which in Python means strict 7-bit
ASCII only, so you got an error if the Python string contained anything
but pure
ASCII. You no longer get an error; you get the UTF-8 representation of the
string instead.
Backpatch to 9.0, where these conversions were introduced.
Jan Urbański
DecodeInterval() failed to honor the "range" parameter (the special SQL
syntax for indicating which fields appear in the literal string) if the
time was signed. This seems inappropriate, so make it work like the
not-signed case. The inconsistency was introduced in my commit
f867339c01, which as noted in its log message
was only really focused on making SQL-compliant literals work per spec.
Including a sign here is not per spec, but if we're going to allow it
then it's reasonable to expect it to work like the not-signed case.
Also, remove bogus setting of tmask, which caused subsequent processing to
think that what had been given was a timezone and not an hh:mm(:ss) field,
thus confusing checks for redundant fields. This seems to be an aboriginal
mistake in Lockhart's commit 2cf1642461.
Add regression test cases to illustrate the changed behaviors.
Back-patch as far as 8.4, where support for spec-compliant interval
literals was added.
Range problem reported and diagnosed by Amit Kapila, tmask problem by me.