Commit Graph

31429 Commits

62f84f1de2 Report success when Windows kill() emulation signals an exiting process.
This is consistent with the POSIX verdict that kill() shall not report
ESRCH for a zombie process.  Back-patch to 9.0 (all supported versions).
Test code from commit d7cdf6ee36 depends
on it, and log messages about kill() reporting "Invalid argument" will
cease to appear for this not-unexpected condition.
2014-07-23 00:36:52 -04:00
d5e70c73bf MSVC: Substitute $(top_builddir) in REGRESS_OPTS.
Commit d7cdf6ee36 introduced a usage
thereof.  Back-patch to 9.0, like that commit.
2014-07-23 00:36:34 -04:00
9dc2a3fd0a Check block number against the correct fork in get_raw_page().
get_raw_page tried to validate the supplied block number against
RelationGetNumberOfBlocks(), which of course is only right when
accessing the main fork.  In most cases, the main fork is longer
than the others, so that the check was too weak (allowing a
lower-level error to be reported, but no real harm to be done).
However, very small tables could have an FSM larger than their heap,
in which case the mistake prevented access to some FSM pages.
Per report from Torsten Foertsch.

In passing, make the bad-block-number error into an ereport not elog
(since it's certainly not an internal error); and fix the sloppily
maintained comment for RelationGetNumberOfBlocksInFork.

This has been wrong since we invented relation forks, so back-patch
to all supported branches.
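
A minimal sketch of the corrected check, assuming the surrounding
get_raw_page code (RelationGetNumberOfBlocksInFork() is the existing
backend API):

    /* Validate against the fork actually being read, not the main fork */
    if (blkno >= RelationGetNumberOfBlocksInFork(rel, forknum))
        ereport(ERROR,
                (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
                 errmsg("block number %u is out of range for relation \"%s\"",
                        blkno, RelationGetRelationName(rel))));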
2014-07-22 11:46:04 -04:00
4c6d0abde9 Diagnose incompatible OpenLDAP versions during build and test.
With OpenLDAP versions 2.4.24 through 2.4.31, inclusive, PostgreSQL
backends can crash at exit.  Raise a warning during "configure" based on
the compile-time OpenLDAP version number, and test the crash scenario in
the dblink test suite.  Back-patch to 9.0 (all supported versions).
2014-07-22 11:02:25 -04:00
6e5a39c9e6 Reject out-of-range numeric timezone specifications.
In commit 631dc390f4, we started to handle
simple numeric timezone offsets via the zic library instead of the old
CTimeZone/HasCTZSet kluge.  However, we overlooked the fact that the zic
code will reject UTC offsets exceeding a week (which seems a bit arbitrary,
but not because it's too tight ...).  This led to possibly setting
session_timezone to NULL, which results in crashes in most timezone-related
operations as of 9.4, and crashes in a small number of places even before
that.  So check for NULL return from pg_tzset_offset() and report an
appropriate error message.  Per bug #11014 from Duncan Gillis.

Back-patch to all supported branches, like the previous patch.
(Unfortunately, as of today that no longer includes 8.4.)
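
The shape of the fix, as a hedged sketch (variable names assumed):

    pg_tz *tz = pg_tzset_offset(gmtoffset);

    if (tz == NULL)
        ereport(ERROR,
                (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
                 errmsg("UTC timezone offset is out of range")));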
2014-07-21 22:41:36 -04:00
391aa8aac1 Stamp 9.0.18. REL9_0_18 2014-07-21 15:16:01 -04:00
11602754aa Release notes for 9.3.5, 9.2.9, 9.1.14, 9.0.18, 8.4.22. 2014-07-21 14:59:39 -04:00
0cee418222 Translation updates 2014-07-21 00:56:23 -04:00
7659b6913e Update time zone data files to tzdata release 2014e.
DST law changes in Crimea, Egypt, Morocco.  New zone Antarctica/Troll
for Norwegian base in Queen Maud Land.
2014-07-19 15:01:38 -04:00
ec66f1adbf Limit pg_upgrade authentication advice to always-secure techniques.
~/.pgpass is a sound choice everywhere, and "peer" authentication is
safe on every platform it supports.  Cease to recommend "trust"
authentication, the safety of which is deeply configuration-specific.
Back-patch to 9.0, where pg_upgrade was introduced.
2014-07-18 16:06:11 -04:00
b8c24f7ab8 Fix two low-probability memory leaks in regular expression parsing.
If pg_regcomp failed after having invoked markst/cleanst, it would leak any
"struct subre" nodes it had created.  (We've already detected all regex
syntax errors at that point, so the only likely causes of later failure
would be query cancel or out-of-memory.)  To fix, make sure freesrnode
knows the difference between the pre-cleanst and post-cleanst cleanup
procedures.  Add some documentation of this less-than-obvious point.

Also, newlacon did the wrong thing with an out-of-memory failure from
realloc(), so that the previously allocated array would be leaked.

Both of these are pretty low-probability scenarios, but a bug is a bug,
so patch all the way back.

Per bug #10976 from Arthur O'Dwyer.
2014-07-18 13:00:57 -04:00
bf08864b8d Fix REASSIGN OWNED for text search objects
Trying to reassign objects owned by a user that had text search
dictionaries or configurations used to fail with:
ERROR:  unexpected classid 3600
or
ERROR:  unexpected classid 3602

Fix by adding cases for those object types in a switch in pg_shdepend.c.

Both REASSIGN OWNED and text search objects go back all the way to 8.1,
so backpatch to all supported branches.  In 9.3 the alter-owner code was
made generic, so the required change in recent branches is pretty
simple; however, for 9.2 and older ones we need some additional
reshuffling to enable specifying objects by OID rather than name.

Text search templates and parsers are not owned objects, so there's no
change required for them.
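
Sketch of the new switch arms; the by-OID helper names here are
illustrative placeholders, not necessarily the committed ones:

    switch (sdepForm->classid)
    {
        /* ... existing object types ... */
        case TSDictionaryRelationId:    /* classid 3600 */
            AlterTSDictionaryOwner_oid(sdepForm->objid, newrole);
            break;
        case TSConfigRelationId:        /* classid 3602 */
            AlterTSConfigurationOwner_oid(sdepForm->objid, newrole);
            break;
    }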

Per bug #9749 reported by Michal Novotný
2014-07-15 13:24:07 -04:00
8b300ce87d doc: small fixes for REINDEX reference page
From: Josh Kupershmidt <schmiddy@gmail.com>
2014-07-14 20:41:58 -04:00
2006bf8104 Add autocompletion of locale keywords for CREATE DATABASE
Adds support for autocomplete of LC_COLLATE and LC_CTYPE to
the CREATE DATABASE command in psql.
2014-07-12 14:21:13 +02:00
cd8ba91a05 Fix bug with whole-row references to append subplans.
ExecEvalWholeRowVar incorrectly supposed that it could "bless" the source
TupleTableSlot just once per query.  But if the input is coming from an
Append (or, perhaps, other cases?) more than one slot might be returned
over the query run.  This led to "record type has not been registered"
errors when a composite datum was extracted from a non-blessed slot.

This bug has been there a long time; I guess it escaped notice because when
dealing with subqueries the planner tends to expand whole-row Vars into
RowExprs, which don't have the same problem.  It is possible to trigger
the problem in all active branches, though, as illustrated by the added
regression test.
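
The gist of the fix, sketched with simplified slot handling
(assign_record_type_typmod() is the existing typcache entry point):

    /* Bless the slot's descriptor on every call: an Append may hand us
     * a different slot from one tuple to the next. */
    TupleDesc tupdesc = slot->tts_tupleDescriptor;

    if (tupdesc->tdtypeid == RECORDOID && tupdesc->tdtypmod < 0)
        assign_record_type_typmod(tupdesc);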
2014-07-11 19:12:51 -04:00
2865d5952d Don't assume a subquery's output is unique if there's a SRF in its tlist.
While the x output of "select x from t group by x" can be presumed unique,
this does not hold for "select x, generate_series(1,10) from t group by x",
because we may expand the set-returning function after the grouping step.
(Perhaps that should be re-thought; but considering all the other oddities
involved with SRFs in targetlists, it seems unlikely we'll change it.)
Put a check in query_is_distinct_for() so it's not fooled by such cases.
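
A sketch of the guard (expression_returns_set() is the existing helper;
exact placement assumed):

    /* Grouping does not make the output unique if the targetlist can
     * return a set: SRFs are expanded after the grouping step. */
    if (expression_returns_set((Node *) query->targetList))
        return false;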

Back-patch to all supported branches.

David Rowley
2014-07-08 14:03:30 -04:00
d0091e3c03 Add some errdetail to checkRuleResultList().
This function wasn't originally thought to be really user-facing,
because converting a table to a view isn't something we expect people
to do manually.  So not all that much effort was spent on the error
messages; in particular, while the code will complain that you got
the column types wrong it won't say exactly what they are.  But since
we repurposed the code to also check compatibility of rule RETURNING
lists, it's definitely user-facing.  It now seems worthwhile to add
errdetail messages showing exactly what the conflict is when there's
a mismatch of column names or types.  This is prompted by bug #10836
from Matthias Raffelsieper, which might have been forestalled if the
error message had reported the wrong column type as being "record".

Per Alvaro's advice, back-patch to branches before 9.4, but resist
the temptation to rephrase any existing strings there.  Adding new
strings is not really a translation degradation; anyway having the
info presented in English is better than not having it at all.
2014-07-02 14:20:47 -04:00
c6b3fb4c53 Fix inadequately-sized output buffer in contrib/unaccent.
The output buffer size in unaccent_lexize() was calculated as input string
length times pg_database_encoding_max_length(), which effectively assumes
that replacement strings aren't more than one character.  While that was
all that we previously documented it to support, the code actually has
always allowed replacement strings of arbitrary length; so if you tried
to make use of longer strings, you were at risk of buffer overrun.  To fix,
use an expansible StringInfo buffer instead of trying to determine the
maximum space needed a priori.
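
A minimal sketch of the buffer handling, where repl/repllen stand in
for the looked-up replacement string:

    StringInfoData buf;

    initStringInfo(&buf);           /* grows on demand */
    /* ... for each input character, look up its replacement ... */
    appendBinaryStringInfo(&buf, repl, repllen);    /* any length is safe */
    /* buf.data is the palloc'd result, sized to fit */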

This would be a security issue if unaccent rules files could be installed
by unprivileged users; but fortunately they can't, so in the back branches
the problem can be labeled as improper configuration by a superuser.
Nonetheless, a memory stomp isn't a nice way of reacting to improper
configuration, so let's back-patch the fix.
2014-07-01 11:23:01 -04:00
d51a6004d0 Remove obsolete example of CSV log file name from log_filename document.
7380b63 changed log_filename so that the epoch was no longer appended
to it when no format specifier is given. But an example of a CSV log
file name with the epoch was still left in the log_filename
documentation. This commit removes that obsolete example.

This commit also documents the defaults of log_directory and
log_filename.

Backpatch to all supported versions.

Christoph Berg
2014-06-26 14:30:11 +09:00
83131e6348 Avoid leaking memory while evaluating arguments for a table function.
ExecMakeTableFunctionResult evaluated the arguments for a function-in-FROM
in the query-lifespan memory context.  This is insignificant in simple
cases where the function relation is scanned only once; but if the function
is in a sub-SELECT or is on the inside of a nested loop, any memory
consumed during argument evaluation can add up quickly.  (The potential for
trouble here had been foreseen long ago, per existing comments; but we'd
not previously seen a complaint from the field about it.)  To fix, create
an additional temporary context just for this purpose.
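
The usual pattern for such a fix, sketched here with the modern
ALLOCSET_DEFAULT_SIZES spelling and an assumed context name:

    /* once, at node startup */
    MemoryContext argContext = AllocSetContextCreate(CurrentMemoryContext,
                                                     "Table function arguments",
                                                     ALLOCSET_DEFAULT_SIZES);

    /* then, around each argument evaluation */
    MemoryContext oldcxt = MemoryContextSwitchTo(argContext);
    /* ... evaluate the function's arguments ... */
    MemoryContextSwitchTo(oldcxt);
    MemoryContextReset(argContext);     /* discard per-call garbage */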

Per an example from MauMau.  Back-patch to all active branches.
2014-06-19 22:13:58 -04:00
b5c9a3bc10 Make pqsignal() available to pg_regress of ECPG and isolation suites.
Commit 453a5d91d4 made it available to the
src/test/regress build of pg_regress, but all pg_regress builds need the
same treatment.  Patch 9.2 through 8.4; in 9.3 and later, pg_regress
gets pqsignal() via libpgport.
2014-06-14 10:57:36 -04:00
5f09c583c0 Secure Unix-domain sockets of "make check" temporary clusters.
Any OS user able to access the socket can connect as the bootstrap
superuser and proceed to execute arbitrary code as the OS user running
the test.  Protect against that by placing the socket in a temporary,
mode-0700 subdirectory of /tmp.  The pg_regress-based test suites and
the pg_upgrade test suite were vulnerable; the $(prove_check)-based test
suites were already secure.  Back-patch to 8.4 (all supported versions).
The hazard remains wherever the temporary cluster accepts TCP
connections, notably on Windows.

As a convenient side effect, this lets testing proceed smoothly in
builds that override DEFAULT_PGSOCKET_DIR.  Popular non-default values
like /var/run/postgresql are often unwritable to the build user.

Security: CVE-2014-0067
2014-06-14 09:41:18 -04:00
52c37346aa Add mkdtemp() to libpgport.
This function is pervasive on free software operating systems; import
NetBSD's implementation.  Back-patch to 8.4, like the commit that will
harness it.
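
Typical usage, as a standalone sketch (the directory prefix is
illustrative):

    #include <stdio.h>
    #include <stdlib.h>

    int
    main(void)
    {
        /* A private, mode-0700 directory under /tmp, suitable for a
         * Unix-domain socket only the current OS user can reach. */
        char dirtemplate[] = "/tmp/pg_check-XXXXXX";

        if (mkdtemp(dirtemplate) == NULL)
        {
            perror("mkdtemp");
            return 1;
        }
        printf("socket directory: %s\n", dirtemplate);
        return 0;
    }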
2014-06-14 09:41:18 -04:00
3fec825f9f Fix pg_restore's processing of old-style BLOB COMMENTS data.
Prior to 9.0, pg_dump handled comments on large objects by dumping a bunch
of COMMENT commands into a single BLOB COMMENTS archive object.  With
sufficiently many such comments, some of the commands would likely get
split across bufferloads when restoring, causing failures in
direct-to-database restores (though no problem would be evident in text
output).  This is the same type of issue we have with table data dumped as
INSERT commands, and it can be fixed in the same way, by using a mini SQL
lexer to figure out where the command boundaries are.  Fortunately, the
COMMENT commands are no more complex to lex than INSERTs, so we can just
re-use the existing lexer for INSERTs.

Per bug #10611 from Jacek Zalewski.  Back-patch to all active branches.
2014-06-12 20:14:52 -04:00
637264641f Remove inadvertent copyright violation in largeobject regression test.
Robert Frost is no longer with us, but his copyrights still are, so
let's stop using "Stopping by Woods on a Snowy Evening" as test data
before somebody decides to sue us.  Wordsworth is more safely dead.
2014-06-12 16:51:20 -04:00
4d5ea42904 Fix ancient encoding error in hungarian.stop.
When we grabbed this file off the Snowball project's website, we mistakenly
supposed that it was in LATIN1 encoding, but evidently it was actually in
LATIN2.  This resulted in ő (o-double-acute, U+0151, which is code 0xF5 in
LATIN2) being misconverted into õ (o-tilde, U+00F5), as complained of in
bug #10589 from Zoltán Sörös.  We'd have messed up u-double-acute too,
but there aren't any of those in the file.  Other characters used in the
file have the same codes in LATIN1 and LATIN2, which no doubt helped hide
the problem for so long.

The error is not only ours: the Snowball project also was confused about
which encoding is required for Hungarian.  But dealing with that will
require source-code changes that I'm not at all sure we'll wish to
back-patch.  Fixing the stopword file seems reasonably safe to back-patch
however.
2014-06-10 22:48:59 -04:00
8940970c0c Fix VACUUM/ANALYZE label mixup in HS regression test.
This fix is a part of commit e2f02ed64e
which fixed the same problem on 9.3, 9.2 and 9.1.
2014-06-06 19:11:18 +09:00
c9f4618e5b Add defenses against running with a wrong selection of LOBLKSIZE.
It's critical that the backend's idea of LOBLKSIZE match the way data has
actually been divided up in pg_largeobject.  While we don't provide any
direct way to adjust that value, doing so is a one-line source code change
and various people have expressed interest recently in changing it.  So,
just as with TOAST_MAX_CHUNK_SIZE, it seems prudent to record the value in
pg_control and cross-check that the backend's compiled-in setting matches
the on-disk data.

Also tweak the code in inv_api.c so that fetches from pg_largeobject
explicitly verify that the length of the data field is not more than
LOBLKSIZE.  Formerly we just had Asserts() for that, which is no protection
at all in production builds.  In some of the call sites an overlength data
value would translate directly to a security-relevant stack clobber, so it
seems worth one extra runtime comparison to be sure.
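
Sketch of the added runtime check (error wording assumed; getbytealen()
is the file-local length helper in inv_api.c):

    int pagelen = getbytealen(datafield);

    if (pagelen > LOBLKSIZE)
        ereport(ERROR,
                (errcode(ERRCODE_DATA_CORRUPTED),
                 errmsg("pg_largeobject page is larger than LOBLKSIZE")));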

In the back branches, we can't change the contents of pg_control; but we
can still make the extra checks in inv_api.c, which will offer some amount
of protection against running with the wrong value of LOBLKSIZE.
2014-06-05 11:31:22 -04:00
037c6fb9f0 Fix longstanding bug in HeapTupleSatisfiesVacuum().
HeapTupleSatisfiesVacuum() didn't properly discern between
DELETE_IN_PROGRESS and INSERT_IN_PROGRESS for rows that have been
inserted in the current transaction and deleted in a aborted
subtransaction of the current backend. At the very least that caused
problems for CLUSTER and CREATE INDEX in transactions that had
aborting subtransactions producing rows, leading to warnings like:
WARNING:  concurrent delete in progress within table "..."
possibly in an endless, uninterruptible loop.

Instead of treating *InProgress xmins the same as *IsCurrent ones,
treat them as being distinct, like the other visibility routines do. As
implemented, this separation can cause a behaviour change for rows
that have been inserted and deleted in another, still running,
transaction: HTSV will now return INSERT_IN_PROGRESS instead of
DELETE_IN_PROGRESS for those. That's both more in line with the other
visibility routines and arguably more correct, the latter because an
INSERT_IN_PROGRESS will make callers look at/wait for xmin, instead of
xmax.
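
Roughly, the discernment now has this shape (heavily simplified; not
the full routine):

    if (TransactionIdIsCurrentTransactionId(xmin))
    {
        /* our own insert: inspect xmax, ignoring a delete from an
         * aborted subtransaction, to choose DELETE_IN_PROGRESS vs.
         * INSERT_IN_PROGRESS */
    }
    else if (TransactionIdIsInProgress(xmin))
        return HEAPTUPLE_INSERT_IN_PROGRESS;    /* someone else's insert */
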
The only current caller where that's possibly worse than the old
behaviour is heap_prune_chain() which now won't mark the page as
prunable if a row has concurrently been inserted and deleted. That's
harmless enough.

As a cautionary measure, also insert an interrupt check before the
gotos in IndexBuildHeapScan() that lead to the uninterruptible loop.
There are other possible causes of repeated loops, like a row that
several sessions try to update and all fail, and the cost of the check
in the retry case is low.

As this bug goes back all the way to the introduction of
subtransactions in 573a71a5da, back-patch to all supported releases.

Reported-By: Sandro Santilli
2014-06-04 23:27:10 +02:00
4f725bbc4e On OS X, link libpython normally, ignoring the "framework" framework.
As of Xcode 5.0, Apple isn't including the Python framework as part of the
SDK-level files, which means that linking to it might fail depending on
whether Xcode thinks you've selected a specific SDK version.  According to
their Tech Note 2328, they've basically deprecated the framework method of
linking to libpython and are telling people to link to the shared library
normally.  (I'm pretty sure this is in direct contradiction to the advice
they were giving a few years ago, but whatever.)  Testing says that this
approach works fine at least as far back as OS X 10.4.11, so let's just
rip out the framework special case entirely.  We do still need a special
case to decide that OS X provides a shared library at all, unfortunately
(I wonder why the distutils check doesn't work ...).  But this is still
less of a special case than before, so it's fine.

Back-patch to all supported branches, since we'll doubtless be hearing
about this more as more people update to recent Xcode.
2014-05-30 18:18:28 -04:00
b2f6754d21 When using the OSSP UUID library, cache its uuid_t state object.
The original coding in contrib/uuid-ossp created and destroyed a uuid_t
object (or, in some cases, even two of them) each time it was called.
This is not the intended usage: you're supposed to keep the uuid_t object
around so that the library can cache its state across uses.  (Other UUID
libraries seem to keep equivalent state behind-the-scenes in static
variables, but OSSP chose differently.)  Aside from being quite inefficient,
creating a new uuid_t loses knowledge of the previously generated UUID,
which in theory could result in duplicate V1-style UUIDs being created
on sufficiently fast machines.

On at least some platforms, creating a new uuid_t also draws some entropy
from /dev/urandom, leaving less for the rest of the system.  This seems
sufficiently unpleasant to justify back-patching this change.
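
The caching pattern against the OSSP API, sketched with error handling
elided:

    static uuid_t *cached_uuid = NULL;  /* lives for the backend's lifetime */

    if (cached_uuid == NULL)
        uuid_create(&cached_uuid);      /* create once, not per call */
    uuid_make(cached_uuid, UUID_MAKE_V1);   /* state carries across calls */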
2014-05-29 13:51:15 -04:00
534cce2fc8 Revert "Fix bogus %name-prefix option syntax in all our Bison files."
This reverts commit a670f5ed1a.

It turns out that the %name-prefix syntax without "=" does not work
at all in pre-2.4 Bison.  We are not prepared to make such a large
jump in minimum required Bison version just to suppress a warning
message in a version hardly any developers are using yet.
When 3.0 gets more popular, we'll figure out a way to deal with this.
In the meantime, BISONFLAGS=-Wno-deprecated is recommendable for
anyone using 3.0 who doesn't want to see the warning.
2014-05-28 19:29:53 -04:00
a670f5ed1a Fix bogus %name-prefix option syntax in all our Bison files.
%name-prefix doesn't use an "=" sign according to the Bison docs, but it
silently accepted one anyway, until Bison 3.0.  This was originally a
typo of mine in commit 012abebab1, and we
seem to have slavishly copied the error into all the other grammar files.

Per report from Vik Fearing; analysis by Peter Eisentraut.

Back-patch to all active branches, since somebody might try to build
a back branch with up-to-date tools.
2014-05-28 15:42:04 -04:00
c8186b3c22 Avoid unportable usage of sscanf(UINT64_FORMAT).
On Mingw, it seems that scanf() doesn't necessarily accept the same format
codes that printf() does, and in particular it may fail to recognize %llu
even though printf() does.  Since configure only probes printf() behavior
while setting up the INT64_FORMAT macros, this means it's unsafe to use
those macros with scanf().  We had only one instance of such a coding
pattern, in contrib/pg_stat_statements, so change that code to avoid
the problem.
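
One portable alternative, as a standalone sketch (the committed fix may
differ in detail):

    #include <stdio.h>
    #include <stdlib.h>

    int
    main(void)
    {
        /* strtoull() parses 64-bit values the same way everywhere,
         * sidestepping scanf's platform-specific length modifiers. */
        const char *buf = "18446744073709551615";
        unsigned long long val = strtoull(buf, NULL, 10);

        printf("%llu\n", val);
        return 0;
    }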

Per buildfarm warnings.  Back-patch to 9.0 where the troublesome code
was introduced.

Michael Paquier
2014-05-26 22:23:42 -04:00
42279b291a Use 0-based numbering in comments about backup blocks.
The macros and functions that work with backup blocks in the redo function
use 0-based numbering, so let's use that consistently in the function that
generates the records too. Makes it so much easier to compare the
generation and replay functions.

Backpatch to 9.0, where we switched from 1-based to 0-based numbering.
2014-05-19 13:27:44 +03:00
0fc9434075 Initialize tsId and dbId fields in WAL record of COMMIT PREPARED.
Commit dd428c79 added dbId and tsId to the xl_xact_commit struct but missed
that prepared transaction commits reuse that struct. Fix that.

Because those fields were left uninitialized, replaying a COMMIT PREPARED
WAL record in a hot standby node would fail to remove the relcache init
file. That can lead to "could not open file" errors on the standby. The
relcache init file only needs to be removed when a system table/index is
rewritten in a transaction using two-phase commit, so that should be rare
in practice. In
HEAD, the incorrect dbId/tsId values are also used for filtering in logical
replication code, causing the transaction to always be filtered out.

Analysis and fix by Andres Freund. Backpatch to 9.0 where hot standby was
introduced.
2014-05-16 10:06:10 +03:00
9fb8cd25fd Fix unportable setvbuf() usage in initdb.
In yesterday's commit 2dc4f011fd, I tried
to force buffering of stdout/stderr in initdb to be what it is by
default when the program is run interactively on Unix (since that's how
most manual testing is done).  This tripped over the fact that Windows
doesn't support _IOLBF mode.  We dealt with that a long time ago in
syslogger.c by falling back to unbuffered mode on Windows.  Export that
solution in port.h and use it in initdb.
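
The idiom, sketched; the PG_IOLBF name follows the port.h convention
described above:

    /* Windows has no _IOLBF; fall back to unbuffered there. */
    #ifdef WIN32
    #define PG_IOLBF _IONBF
    #else
    #define PG_IOLBF _IOLBF
    #endif

    setvbuf(stdout, NULL, PG_IOLBF, 0); /* line-buffered where possible */
    setvbuf(stderr, NULL, _IONBF, 0);   /* always unbuffered */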

Back-patch to 8.4, like the previous commit.
2014-05-15 15:58:09 -04:00
f1f3bcb7c4 Handle duplicate XIDs in txid_snapshot.
The proc array can contain duplicate XIDs, when a transaction is just being
prepared for two-phase commit. To cope, remove any duplicates in
txid_current_snapshot(). Also ignore duplicates in the input functions, so
that if e.g. you have an old pg_dump file that already contains duplicates,
it will be accepted.
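
A self-contained sketch of the sort-and-dedupe approach (types
simplified):

    #include <inttypes.h>
    #include <stdio.h>
    #include <stdlib.h>

    typedef uint32_t TransactionId;

    static int
    xid_cmp(const void *a, const void *b)
    {
        TransactionId xa = *(const TransactionId *) a;
        TransactionId xb = *(const TransactionId *) b;

        return (xa > xb) - (xa < xb);
    }

    /* Sort the XID array, squeeze out duplicates, return the new length. */
    static int
    dedupe_xids(TransactionId *xip, int n)
    {
        int     i, j;

        if (n <= 1)
            return n;
        qsort(xip, n, sizeof(TransactionId), xid_cmp);
        for (i = 1, j = 0; i < n; i++)
            if (xip[i] != xip[j])
                xip[++j] = xip[i];
        return j + 1;
    }

    int
    main(void)
    {
        TransactionId xs[] = {7, 3, 7, 5, 3};
        int n = dedupe_xids(xs, 5);

        for (int i = 0; i < n; i++)
            printf("%" PRIu32 "\n", xs[i]);
        return 0;
    }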

Report and fix by Jan Wieck. Backpatch to all supported versions.
2014-05-15 18:31:14 +03:00
82a83ec684 Fix race condition in preparing a transaction for two-phase commit.
To lock a prepared transaction's shared memory entry, we used to mark it
with the XID of the backend. When the XID was no longer active according
to the proc array, the entry was implicitly considered as not locked
anymore. However, when preparing a transaction, the backend's proc array
entry was cleared before transferring the locks (and some other state) to
the prepared transaction's dummy PGPROC entry, so there was a window where
another backend could finish the transaction before it was in fact fully
prepared.

To fix, rewrite the locking mechanism of global transaction entries. Instead
of an XID, just have a simple locked-or-not flag in each entry (we store the
locking backend's backend id rather than a simple boolean, but that's just
for debugging purposes). The backend is responsible for explicitly unlocking
the entry, and to make sure that happens, install a callback to unlock it
on abort or process exit.
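
Sketched registration of that safety net (the callback name is an
assumption, not confirmed against the committed patch):

    /* Once per backend: make sure a locked entry is released even on
     * error or process exit. */
    on_shmem_exit(AtProcExit_Twophase, 0);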

Backpatch to all supported versions.
2014-05-15 17:08:49 +03:00
5e798475cc In initdb, ensure stdout/stderr buffering behavior is what we expect.
Since this program may print to either stdout or stderr, the relative
ordering of its messages depends on the buffering behavior of those files.
Force stdout to be line-buffered and stderr to be unbuffered, ensuring
that the behavior will match standard Unix interactive behavior, even
when stdout and stderr are rerouted to a file.

Per complaint from Tomas Vondra.  The particular case he pointed out is
new in HEAD, but issues of the same sort could arise in any branch with
other error messages, so back-patch to all branches.

I'm unsure whether we might not want to do this in other client programs
as well.  For the moment, just fix initdb.
2014-05-14 21:14:06 -04:00
c87c43f087 Initialize padding bytes in btree_gist varbit support.
The code expands a varbit gist leaf key to a node key by copying the bit
data twice into a varlen datum, as both the lower and upper key. The lower
key was expanded to INTALIGN size, but the padding bytes were not
initialized. That's a problem because when the lower/upper keys are
compared, the padding bytes are compared too when the values are otherwise
equal. That could lead to incorrect query results.
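
A defensive form of the fix, sketched (allocation shape assumed):

    /* Zero the whole allocation so the INTALIGN padding beyond "len"
     * is deterministic instead of garbage. */
    Size   alloclen = INTALIGN(len);
    char  *buf = palloc0(alloclen);

    memcpy(buf, bitdata, len);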

REINDEX is advised for any btree_gist indexes on bit or bit varying data
type, to fix any garbage padding bytes on disk.

Per Valgrind, reported by Andres Freund. Backpatch to all supported
versions.
2014-05-13 15:27:36 +03:00
9cb9d75246 Ignore config.pl and buildenv.pl in src/tools/msvc.
config.pl and buildenv.pl can be used to customize build settings when
using MSVC.  They should never get committed into the common source tree.

Back-patch to 9.0; it looks like the rules were different in 8.4.

Michael Paquier
2014-05-12 14:24:42 -04:00
d23bf2e6c8 Accept tcl 8.6 in configure's probe for tclsh.
Usually the search would find plain "tclsh" without any trouble,
but some installations might only have the version-numbered flavor
of that program.

No compatibility problems have been reported with 8.6, so we might
as well back-patch this to all active branches.

Christoph Berg
2014-05-10 10:48:14 -04:00
6b2a1445ec Document permissions needed for pg_database_size and pg_tablespace_size.
Back in 8.3, we installed permissions checks in these functions (see
commits 8bc225e799 and cc26599b72).  But we forgot to document that
anywhere in the user-facing docs; it did get mentioned in the 8.3 release
notes, but nobody's looking at that any more.  Per gripe from Suya Huang.
2014-05-08 21:45:29 -04:00
f483b15f1a Un-break ecpg test suite under --disable-integer-datetimes.
Commit 4318daecc9 broke it.  The change in
sub-second precision at extreme dates is normal.  The inconsistent
truncation vs. rounding is essentially a bug, albeit a longstanding one.
Back-patch to 8.4, like the causative commit.
2014-05-08 19:29:32 -04:00
8b4efe1f3a Protect against torn pages when deleting GIN list pages.
To-be-deleted list pages contain no useful information, as they are being
deleted, but we must still protect the writes from being torn by a crash
after a partial write. To do that, re-initialize the pages on WAL replay.

Jeff Janes caught this with a test program to test partial writes.
Backpatch to all supported versions.
2014-05-08 14:44:06 +03:00
77e6628276 Avoid buffer bloat in libpq when server is consistently faster than client.
If the server sends a long stream of data, and the server + network are
consistently fast enough to force the recv() loop in pqReadData() to
iterate until libpq's input buffer is full, then upon processing the last
incomplete message in each bufferload we'd usually double the buffer size,
due to supposing that we didn't have enough room in the buffer to finish
collecting that message.  After filling the newly-enlarged buffer, the
cycle repeats, eventually resulting in an out-of-memory situation (which
would be reported misleadingly as "lost synchronization with server").
Of course, we should not enlarge the buffer unless we still need room
after discarding already-processed messages.
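
The corrected decision, sketched against libpq's buffer fields
(simplified; not the committed hunk):

    /* Reclaim space held by already-parsed messages first... */
    if (conn->inStart > 0)
    {
        memmove(conn->inBuffer, conn->inBuffer + conn->inStart,
                conn->inEnd - conn->inStart);
        conn->inEnd -= conn->inStart;
        conn->inCursor -= conn->inStart;
        conn->inStart = 0;
    }
    /* ...and only enlarge if there is still not enough room. */
    if (conn->inBufSize - conn->inEnd < bytes_needed)
        pqCheckInBufferSpace(conn->inEnd + bytes_needed, conn);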

This bug dates back quite a long time: pqParseInput3 has had the behavior
since perhaps 2003, getCopyDataMessage at least since commit 70066eb1a1
in 2008.  Probably the reason it's not been isolated before is that in
common environments the recv() loop would always be faster than the server
(if on the same machine) or faster than the network (if not); or at least
it wouldn't be slower consistently enough to let the buffer ramp up to a
problematic size.  The reported cases involve Windows, which perhaps has
different timing behavior than other platforms.

Per bug #7914 from Shin-ichi Morita, though this is different from his
proposed solution.  Back-patch to all supported branches.
2014-05-07 21:38:47 -04:00
7f66ade71a Fix failure to set ActiveSnapshot while rewinding a cursor.
ActiveSnapshot needs to be set when we call ExecutorRewind because some
plan node types may execute user-defined functions during their ReScan
calls (nodeLimit.c does so, at least).  The wisdom of that is somewhat
debatable, perhaps, but for now the simplest fix is to make sure the
required context is valid.  Failure to do this typically led to a
null-pointer-dereference core dump, though it's possible that in more
complex cases a function could be executed with the wrong snapshot
leading to very subtle misbehavior.
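
The fix has roughly this shape (snapshot source assumed):

    /* Give user-defined functions run during ReScan a valid snapshot. */
    PushActiveSnapshot(queryDesc->snapshot);
    ExecutorRewind(queryDesc);
    PopActiveSnapshot();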

Per report from Leif Jensen.  It's been broken for a long time, so
back-patch to all active branches.
2014-05-07 14:25:25 -04:00
8398f3736c Fix interval test, which was broken for floating-point timestamps.
Commit 4318daecc9 introduced a test that
couldn't be made consistent between integer and floating-point
timestamps.

It was designed to test the longest possible interval output length,
so removing four zeros from the number of hours, as this patch does,
is not ideal. But the test still has some utility for its original
purpose, and there aren't a lot of other good options.

Noah Misch suggested a different approach where we test that the
output either matches what we expect from integer timestamps or what
we expect from floating-point timestamps. That seemed to obscure an
otherwise simple test, however.

Reviewed by Tom Lane and Noah Misch.
2014-05-06 21:38:00 -07:00
1d033d3054 Remove tabs after spaces in C comments
This was not changed in HEAD, but will be done later as part of a
pgindent run.  Future pgindent runs will also do this.

Report by Tom Lane

Backpatch through all supported branches, but not HEAD
2014-05-06 11:26:25 -04:00