The NetBSD shell apparently returns non-zero from an unset command if
the variable is already unset. This matters when, as in pg_upgrade's
test.sh, we are working under 'set -e'. To protect against this, we
first set the PG variables to an empty string before unsetting them
completely.
Error found on buildfarm member coypu, solution from Rémi Zara.
These modules used the YYPARSE_PARAM macro, which has been deprecated
by the bison folk since 1.875, and which they finally removed in 3.0.
Adjust the code to use the replacement facility, %parse-param, which
is a much better solution anyway since it allows specification of the
type of the extra parser parameter. We can thus get rid of a lot of
unsightly casting.
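As a rough illustration of the replacement idiom (the parameter name and type here are only examples, not what the affected grammars actually declare):

    /* In the grammar's declarations section, instead of defining
     * YYPARSE_PARAM, declare a typed extra parameter: */
    %parse-param {List **result}

    /* bison then generates a correspondingly typed entry point, so no
     * void-pointer casting is needed in the actions or at call sites: */
    int yyparse(List **result);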
Back-patch to all active branches, since somebody might try to build
a back branch with up-to-date tools.
This helps guard against changes in the set of reserved keywords from
one version to another. In theory it should only be an issue if we
de-reserve a keyword in a newer release, since that can create the type
of problem shown in bug #8128.
Back-patch to 9.1 where the --quote-all-identifiers option was added.
Now that pg_dump no longer dumps invalid indexes, per commit
683abc73dff549e94555d4020dae8d02f32ed78b, have pg_upgrade also skip
them. Previously pg_upgrade threw an error if invalid indexes existed.
Backpatch to 9.2, 9.1, and 9.0 (where pg_upgrade was added to git)
Windows sometimes gets upset if we rename a large directory and then try
to use the old name quickly, as seen in occasional buildfarm failures.
So we avoid that by building the old version in the intended
destination in the first place instead of renaming it, similar to the
change made for the same reason in commit b7f8465c.
If the remote database's settings of these GUCs are different from ours,
ambiguous datetime values may be read incorrectly. To fix, temporarily
adopt the remote server's settings while we ingest a query result.
This is not a complete fix, since it doesn't do anything about ambiguous
values in commands sent to the remote server; but there seems little we
can do about that end of it given dblink's entirely textual API for
transmitted commands.
Back-patch to 9.2. The hazard exists in all versions, but this patch
would need more work to apply before 9.2. Given the lack of field
complaints about this issue, it doesn't seem worth the effort at present.
Daniel Farina and Tom Lane
Cases such as similarity('', '') produced a NaN result due to computing
0/0. Per discussion, make it return zero instead.
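A minimal sketch of the guard, with illustrative variable names rather than the actual pg_trgm ones:

    float denom = (float) (len1 + len2 - shared);

    /* similarity('', '') yields no trigrams on either side, hence 0/0 = NaN;
     * return zero for that degenerate case instead */
    if (denom == 0.0f)
        return 0.0f;
    return (float) shared / denom;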
This appears to be the basic cause of bug #7867 from Michele Baravalle,
although it remains unclear why her installation doesn't think Cyrillic
letters are letters.
Back-patch to all active branches.
The previous coding supposed that the first differing bytes in two varlena
datums must have the same sign difference as their overall comparison
result. This is obviously bogus for text strings in non-C locales, and
probably wrong for numeric, and even for bytea I think it was wrong on
machines where char is signed. When the assumption failed, the function
could deliver a zero or negative penalty in situations where such a result
is quite ridiculous, leading the core GiST code to make very bad page-split
decisions.
To fix, take the absolute values of the byte-level differences. Also,
switch the code to using unsigned char not just char, so that the behavior
will be consistent whether char is signed or not.
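A minimal, self-contained sketch of the corrected byte comparison (names are illustrative, not the btree_gist ones):

    #include <stdlib.h>

    /* Accumulate |a[i] - b[i]| over unsigned bytes: the result is always
     * non-negative and does not depend on whether plain char is signed. */
    static double
    byte_diff_penalty(const unsigned char *a, const unsigned char *b, int len)
    {
        double penalty = 0.0;

        for (int i = 0; i < len; i++)
            penalty += abs((int) a[i] - (int) b[i]);
        return penalty;
    }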
Per investigation of a trouble report from Tomas Vondra. Back-patch to all
supported branches.
gbt_var_bin_union() failed to do the right thing when the existing range
needed to be widened at both ends rather than just one end. This could
result in an invalid index in which keys that are present would not be
found by searches, because the searches would not think they need to
descend to the relevant leaf pages. This error affected all the varlena
datatypes supported by btree_gist (text, bytea, bit, numeric).
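The idea of the fix, shown as a minimal sketch with a simplified integer range standing in for a varlena key:

    typedef struct
    {
        int lower;
        int upper;
    } Range;

    /* Widen each end of the existing range independently; the bug was
     * assuming that only one end could ever need to move. */
    static void
    range_union(Range *r, int newval)
    {
        if (newval < r->lower)
            r->lower = newval;
        if (newval > r->upper)
            r->upper = newval;
    }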
Per investigation of a trouble report from Tomas Vondra. (There is also
an issue in gbt_var_penalty(), but that should only result in inefficiency
not wrong answers. I'm committing this separately so that we have a git
state in which it can be tested that bad penalty results don't produce
invalid indexes.) Back-patch to all supported branches.
This is now used by ecpg tests, and not clobbered by pg_upgrade
tests. This change won't affect anything that doesn't set this
environment variable, but will enable the buildfarm to control
exactly what port regression test installs will be running on,
and thus to detect possible rogue postmasters more easily.
Backpatch to release 9.2 where EXTRA_REGRESS_OPTS was first used.
On non-Windows machines, we use the Unix socket for connections to test
postmasters, so there is no need to create a TCP socket. Furthermore,
doing so causes failures due to port conflicts if two builds are carried
out concurrently on one machine. (If the builds are done in different
chroots, which is standard practice at least in Red Hat distros, there
is no risk of conflict on the Unix socket.) Suppressing the TCP socket
by setting listen_addresses to empty has long been standard practice
for pg_regress, and pg_upgrade knows about this too ... but pg_upgrade's
test.sh didn't get the memo.
Back-patch to 9.2, and also sync the 9.2 version of the script with HEAD
as much as practical.
All versions of pg_upgrade upgraded invalid indexes caused by CREATE
INDEX CONCURRENTLY failures and marked them as valid. The patch adds a
check to all pg_upgrade versions and throws an error during upgrade or
--check.
Backpatch to 9.2, 9.1, 9.0. Patch slightly adjusted.
Normally each module is tested in a database named contrib_regression,
which is dropped and recreated at the beginning of each pg_regress run.

This new mode, enabled by adding USE_MODULE_DB=1 to the make command
line, runs most modules in a database with the module name embedded in
it.
This will make testing pg_upgrade on clusters with the contrib modules
a lot easier.
Second attempt at this, this time accommodating make versions older
than 3.82.
Still to be done: adapt to the MSVC build system.
Backpatch to 9.0, which is the earliest version it is reasonably
possible to test upgrading from.
Normally each module is tested in a database named contrib_regression,
which is dropped and recreated at the beginning of each pg_regress run.
This mode, enabled by adding USE_MODULE_DB=1 to the make command line,
runs most modules in a database with the module name embedded in it.
This will make testing pg_upgrade on clusters with the contrib modules
a lot easier.
Still to be done: adapt to the MSVC build system.
Backpatch to 9.0, which is the earliest version it is reasonably
possible to test upgrading from.
Commit 8cb53654dbdb4c386369eb988062d0bbb6de725e, which introduced DROP
INDEX CONCURRENTLY, managed to break CREATE INDEX CONCURRENTLY via a poor
choice of catalog state representation. The pg_index state for an index
that's reached the final pre-drop stage was the same as the state for an
index just created by CREATE INDEX CONCURRENTLY. This meant that the
(necessary) change to make RelationGetIndexList ignore about-to-die indexes
also made it ignore freshly-created indexes; which is catastrophic because
the latter do need to be considered in HOT-safety decisions. Failure to
do so leads to incorrect index entries and subsequently wrong results from
queries depending on the concurrently-created index.
To fix, make the final state be indisvalid = true and indisready = false,
which is otherwise nonsensical. This is pretty ugly but we can't add
another column without forcing initdb, and it's too late for that in 9.2.
(There's a cleaner fix in HEAD.)
In addition, change CREATE/DROP INDEX CONCURRENTLY so that the pg_index
flag changes they make without exclusive lock on the index are made via
heap_inplace_update() rather than a normal transactional update. The
latter is not very safe because moving the pg_index tuple could result in
concurrent SnapshotNow scans finding it twice or not at all, thus possibly
resulting in index corruption. This is a pre-existing bug in CREATE INDEX
CONCURRENTLY, which was copied into the DROP code.
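A rough sketch of the in-place flag update (simplified; the helper name is hypothetical, indexOid is assumed to identify the target index, and the usual backend headers for heapam, syscache, and pg_index are assumed):

    static void
    index_set_dead(Oid indexOid)
    {
        Relation      pg_index;
        HeapTuple     indexTuple;
        Form_pg_index indexForm;

        pg_index = heap_open(IndexRelationId, RowExclusiveLock);
        indexTuple = SearchSysCacheCopy1(INDEXRELID, ObjectIdGetDatum(indexOid));
        if (!HeapTupleIsValid(indexTuple))
            elog(ERROR, "cache lookup failed for index %u", indexOid);
        indexForm = (Form_pg_index) GETSTRUCT(indexTuple);

        /* final pre-drop state: still marked valid, but no longer ready */
        indexForm->indisready = false;

        /* overwrite the tuple in place: it never moves, so concurrent
         * SnapshotNow scans cannot find it twice or miss it entirely */
        heap_inplace_update(pg_index, indexTuple);

        heap_close(pg_index, RowExclusiveLock);
    }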
In addition, fix various places in the code that ought to check to make
sure that the indexes they are manipulating are valid and/or ready as
appropriate. These represent bugs that have existed since 8.2, since
a failed CREATE INDEX CONCURRENTLY could leave a corrupt or invalid
index behind, and we ought not try to do anything that might fail with
such an index.
Also fix RelationReloadIndexInfo to ensure it copies all the pg_index
columns that are allowed to change after initial creation. Previously we
could have been left with stale values of some fields in an index relcache
entry. It's not clear whether this actually had any user-visible
consequences, but it's at least a bug waiting to happen.
In addition, do some code and docs review for DROP INDEX CONCURRENTLY;
some cosmetic code cleanup but mostly addition and revision of comments.
Portions of this need to be back-patched even further, but I'll work
on that separately.
Problem reported by Amit Kapila, diagnosis by Pavan Deolasee,
fix by Tom Lane and Andres Freund.
The previous definitions of these GUC variables allowed them to range
up to INT_MAX, but in point of fact the underlying code would suffer
overflows or other errors with large values. Reduce the maximum values
to something that won't misbehave. There's no apparent value in working
harder than this, since very large delays aren't sensible for any of
these. (Note: the risk with archive_timeout is that if we're late
checking the state, the timestamp difference it's being compared to
might overflow. So we need some amount of slop; the choice of INT_MAX/2
is arbitrary.)
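To illustrate the hazard with a deliberately contrived example (not the actual archiver code):

    #include <limits.h>

    static void
    overflow_example(void)
    {
        int timeout = INT_MAX;   /* previously allowed maximum, in seconds */
        int lateness = 10;       /* how late we might be in checking the state */

        /* Any arithmetic of the form timeout + lateness overflows a signed
         * int here; capping the GUC at INT_MAX / 2 leaves plenty of slop. */
        int deadline = timeout + lateness;   /* signed overflow: undefined */

        (void) deadline;
    }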
Per followup investigation of bug #7670. Although this isn't a very
significant fix, might as well back-patch.
On some platforms these functions return NULL, rather than the more common
practice of returning a pointer to a zero-sized block of memory. Hack our
various wrapper functions to hide the difference by substituting a size
request of 1. This is probably not so important for the callers, who
should never touch the block anyway if they asked for size 0 --- but it's
important for the wrapper functions themselves, which mistakenly treated
the NULL result as an out-of-memory failure. This broke at least pg_dump
for the case of no user-defined aggregates, as per report from
Matthew Carrington.
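A minimal sketch of the wrapper-level fix (the real wrappers live in several frontend programs and differ slightly in their error reporting):

    #include <stdio.h>
    #include <stdlib.h>

    void *
    pg_malloc(size_t size)
    {
        void *p;

        /* Avoid the unportable behavior of malloc(0): some platforms return
         * NULL, which would be indistinguishable from out-of-memory. */
        if (size == 0)
            size = 1;

        p = malloc(size);
        if (p == NULL)
        {
            fprintf(stderr, "out of memory\n");
            exit(EXIT_FAILURE);
        }
        return p;
    }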
Back-patch to 9.2 to fix the pg_dump issue. Given the lack of previous
complaints, it seems likely that there is no live bug in previous releases,
even though some of these functions were in place before that.
entries are not dumped. This fixes an error caused by
dropping/recreating the information_schema, but other failures were also
possible.
Backpatch to 9.2.
If we call pg_ctl stop, the server might continue running and thus
hold a log file open for a short time after it has deleted its pid file
(which is when pg_ctl will exit), so a subsequent attempt to open the
log file might fail.
We therefore try to open it a few times, sleeping one second between
tries, to give the server time to exit.
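A minimal sketch of the retry idea (names and the retry count here are illustrative, not the committed code):

    #include <stdio.h>
    #include <unistd.h>

    static FILE *
    open_log_with_retries(const char *log_path)
    {
        FILE *fp = NULL;

        for (int tries = 0; tries < 5; tries++)
        {
            fp = fopen(log_path, "a");
            if (fp != NULL)
                break;
            sleep(1);   /* give the old postmaster time to let go of the file */
        }
        return fp;
    }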
This corrects an error that was observed on the buildfarm.
Backpatched to 9.2.
Call pg_dumpall using the -f switch instead of redirection, to avoid
writing the output in text mode and generating spurious carriage
returns. Remove the carriage-return-ignoring hack introduced by
commit e442b0f0c6fd26738bafdeb5222511b586dfe4b9.
Backpatch to 9.2.
pg_upgrade opened the output from pg_dumpall in text mode and
wrote the split files in text mode. This caused unwanted eating
of intended carriage returns on input and production of spurious
carriage returns on output. To avoid this, open all these files
in binary mode. On non-Windows platforms, this change has no
effect.
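The essence of the change, as a hedged illustration with hypothetical file names:

    #include <stdio.h>

    static void
    open_dump_files(void)
    {
        /* Open both the pg_dumpall output and the per-database split file in
         * binary mode so Windows performs no CR/LF translation; on other
         * platforms "rb"/"wb" behave exactly like "r"/"w". */
        FILE *dump_in   = fopen("pg_upgrade_dump_all.sql", "rb");   /* was "r" */
        FILE *split_out = fopen("pg_upgrade_dump_db.sql",  "wb");   /* was "w" */

        /* ... copy lines from dump_in into split_out here ... */
        if (dump_in)
            fclose(dump_in);
        if (split_out)
            fclose(split_out);
    }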
Backpatch to 9.0. On 9.0 and 9.1, we also switch from redirecting
pg_dumpall's output to using pg_dumpall's -f switch, for the same
reason.
socket location. Also, prevent putting the socket in the current
directory for pre-9.1 servers in live check and non-live check mode,
because pre-9.1 pg_ctl -w can't handle it.
Backpatch to 9.2.
pg_upgrade produces a platform-specific script to remove the old
directory, but on Windows it has not been making sure that the
paths it writes as arguments for rmdir and del use the backslash
path separator, which will cause these scripts to fail.
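A minimal sketch of the path fix (the helper name is hypothetical):

    /* Convert forward slashes to the backslashes that cmd.exe's rmdir and
     * del commands expect, before writing the path into the script. */
    static void
    make_windows_path(char *path)
    {
        for (char *p = path; *p != '\0'; p++)
        {
            if (*p == '/')
                *p = '\\';
        }
    }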
The fix is backpatched to Release 9.0.
This syncs contrib/pg_upgrade in the 9.2 branch with HEAD, except for the
HEAD changes related to converting XLogRecPtr to 64-bit int. It includes
back-patching these commits:
666d494d19dbd5dc7a177709a2f7069913f8ab89
pg_upgrade: abstract out copying of files from old cluster to new
7afa8bed65ea925208f128048f3a528a64e1319a
pg_upgrade: Run the created scripts in the test suite
ab577e63faf792593ca728625a8ef0b1dfaf7500
Remove analyze_new_cluster.sh on make clean, too
34c02044ed7e7defde5a853b26dcd806c872d974
Fix thinko in comment
088c065ce8e405fafbfa966937184ece9defcf20
pg_upgrade: Fix exec_prog API to be less flaky
f763b77193b04eba03a1f4ce46df34dc0348419e
Fix pg_upgrade to cope with non-default unix_socket_directory scenarios.
Extraction of trigrams did not process LIKE escape sequences properly,
leading to possible misidentification of trigrams near escapes, resulting
in incorrect index search results.
Fujii Masao
libxslt offers the ability to read and write both files and URLs through
stylesheet commands, thus allowing unprivileged database users to both read
and write data with the privileges of the database server. Disable that
through proper use of libxslt's security options.
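Roughly, the lockdown uses libxslt's security-preferences API along these lines (a sketch, not the exact committed code; the helper name is hypothetical and the transform context is assumed to have been created already):

    #include <libxslt/xsltInternals.h>
    #include <libxslt/security.h>

    static void
    lock_down_xslt(xsltTransformContextPtr ctxt)
    {
        xsltSecurityPrefsPtr sec = xsltNewSecurityPrefs();

        /* Forbid all file and network access from within the stylesheet. */
        xsltSetSecurityPrefs(sec, XSLT_SECPREF_READ_FILE,        xsltSecurityForbid);
        xsltSetSecurityPrefs(sec, XSLT_SECPREF_WRITE_FILE,       xsltSecurityForbid);
        xsltSetSecurityPrefs(sec, XSLT_SECPREF_CREATE_DIRECTORY, xsltSecurityForbid);
        xsltSetSecurityPrefs(sec, XSLT_SECPREF_READ_NETWORK,     xsltSecurityForbid);
        xsltSetSecurityPrefs(sec, XSLT_SECPREF_WRITE_NETWORK,    xsltSecurityForbid);

        /* Attach the preferences to the transform context before running it. */
        xsltSetCtxtSecurityPrefs(sec, ctxt);
    }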
Also, remove xslt_process()'s ability to fetch documents and stylesheets
from external files/URLs. While this was a documented "feature", it was
long regarded as a terrible idea. The fix for CVE-2012-3489 broke that
capability, and rather than expend effort on trying to fix it, we're just
going to summarily remove it.
While the ability to write as well as read makes this security hole
considerably worse than CVE-2012-3489, the problem is mitigated by the fact
that xslt_process() is not available unless contrib/xml2 is installed,
and the longstanding warnings about security risks from that should have
discouraged prudent DBAs from installing it in security-exposed databases.
Reported and fixed by Peter Eisentraut.
Security: CVE-2012-3488
4741e9afb93f0d769655b2d18c2b73b86f281010. This was done by adding an
optional second log file parameter to exec_prog(), and closing and
reopening the log file between system() calls.
Backpatch to 9.2.
After taking awhile to digest the row-processor feature that was added to
libpq in commit 92785dac2ee7026948962cd61c4cd84a2d052772, we've concluded
it is over-complicated and too hard to use. Leave the core infrastructure
changes in place (that is, there's still a row processor function inside
libpq), but remove the exposed API pieces, and instead provide a "single
row" mode switch that causes PQgetResult to return one row at a time in
separate PGresult objects.
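In its simplest form, a client loop using the new mode looks about like this (process_row() is a hypothetical hook standing in for application logic):

    #include <libpq-fe.h>

    static void process_row(PGresult *res);   /* hypothetical application hook */

    static void
    fetch_in_single_row_mode(PGconn *conn)
    {
        PQsendQuery(conn, "SELECT generate_series(1, 1000000)");
        PQsetSingleRowMode(conn);          /* must follow PQsendQuery immediately */

        for (;;)
        {
            PGresult *res = PQgetResult(conn);

            if (res == NULL)
                break;                     /* query is complete */
            if (PQresultStatus(res) == PGRES_SINGLE_TUPLE)
                process_row(res);          /* exactly one row per PGresult */
            /* the final PGresult carries PGRES_TUPLES_OK and zero rows */
            PQclear(res);
        }
    }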
This approach incurs more overhead than proper use of a row processor
callback would, since construction of a PGresult per row adds extra cycles.
However, it is far easier to use and harder to break. The single-row mode
still affords applications the primary benefit that the row processor API
was meant to provide, namely not having to accumulate large result sets in
memory before processing them. Preliminary testing suggests that we can
probably buy back most of the extra cycles by micro-optimizing construction
of the extra results, but that task will be left for another day.
Marko Kreen