mirror of https://github.com/postgres/postgres.git synced 2025-06-11 20:28:21 +03:00
Commit Graph

2724 Commits

76943f54a7 Make dblink try harder to form useful error messages
When libpq encounters a connection-level error, e.g. runs out of memory
while forming a result, there will be no error associated with PGresult,
but a message will be placed into PGconn's error buffer. postgres_fdw
takes care to use the PGconn error message when PGresult does not have
one, but dblink has been negligent in that regard. Modify dblink to mirror
what postgres_fdw has been doing.
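
For illustration, a minimal libpq sketch of that fallback (not the commit's
actual code):

    #include <libpq-fe.h>

    /* Prefer the PGresult message; fall back to the PGconn error buffer
     * when the result carries none (e.g. out-of-memory while forming it). */
    static const char *
    result_error_message(PGconn *conn, PGresult *res)
    {
        const char *msg = (res != NULL) ? PQresultErrorMessage(res) : NULL;

        if (msg == NULL || msg[0] == '\0')
            msg = PQerrorMessage(conn);
        return msg;
    }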

Back-patch to all supported branches.

Author: Joe Conway
Reviewed-By: Tom Lane
Discussion: https://postgr.es/m/02fa2d90-2efd-00bc-fefc-c23c00eb671e%40joeconway.com
2016-12-22 09:47:36 -08:00
cb687e0acf Protect dblink from invalid options when using postgres_fdw server
When dblink uses a postgres_fdw server name for its connection, it
is possible for the connection to have options that are invalid
with dblink (e.g. "updatable"). The recommended way to avoid this
problem is to use dblink_fdw servers instead. However there are use
cases for using postgres_fdw, and possibly other FDWs, for dblink
connection options, therefore protect against trying to use any
options that do not apply by using is_valid_dblink_option() when
building the connection string from the options.
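
A self-contained sketch of the filtering idea; the option list below is
purely illustrative, and the real is_valid_dblink_option() consults libpq's
own option table rather than a hard-coded list:

    #include <stdbool.h>
    #include <string.h>

    /* Illustrative stand-in: keep only options that belong in a libpq
     * connection string; anything else, e.g. postgres_fdw's "updatable",
     * is skipped when the conninfo is assembled. */
    static bool
    option_usable_for_dblink(const char *name)
    {
        static const char *const usable[] = {
            "host", "port", "dbname", "user", "password", NULL
        };

        for (int i = 0; usable[i] != NULL; i++)
            if (strcmp(name, usable[i]) == 0)
                return true;
        return false;
    }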

Back-patch to 9.3. Although 9.2 supports FDWs for connection info,
is_valid_dblink_option() did not yet exist, and neither did
postgres_fdw, at least in the postgres source tree. Given the lack
of previous complaints, fixing that seems too invasive/not worth it.

Author: Corey Huinker
Reviewed-By: Joe Conway
Discussion: https://postgr.es/m/CADkLM%3DfWyXVEyYcqbcRnxcHutkP45UHU9WD7XpdZaMfe7S%3DRwA%40mail.gmail.com
2016-12-22 09:19:08 -08:00
2807080fb9 Give a useful error message if uuid-ossp is built without preconfiguration.
Before commit b8cc8f947, it was possible to build contrib/uuid-ossp without
having told configure you meant to; you could just cd into that directory
and "make".  That no longer works because the code depends on configure to
have done header and library probes, but the ensuing error messages are
not so easy to interpret if you're not an old C hand.  We've gotten a
couple of complaints recently from people trying to do this the low-tech
way, so add an explicit #error directing the user to use --with-uuid.
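
Roughly what such a guard looks like (macro names are illustrative; the real
ones are set by configure):

    /* Fail at compile time with a readable hint instead of obscure
     * missing-header errors. */
    #if !defined(HAVE_UUID_OSSP) && !defined(HAVE_UUID_E2FS) && !defined(HAVE_UUID_BSD)
    #error Rerun configure using --with-uuid to select a UUID library before building contrib/uuid-ossp.
    #endif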

(In principle we might want to do something similar in the other
optionally-built contrib modules; but I don't think any of the others have
ever worked without preconfiguration, so there are no bad habits to break
people of.)

Back-patch to 9.4 where the previous commit came in.

Report: https://postgr.es/m/CAHeEsBf42AWTnk=1qJvFv+mYgRFm07Knsfuc86Ono8nRjf3tvQ@mail.gmail.com
Report: https://postgr.es/m/CAKYdkBrUaZX+F6KpmzoHqMtiUqCtAW_w6Dgvr6F0WTiopuGxow@mail.gmail.com
2016-12-22 11:19:23 -05:00
0f5b1867c2 Improve dblink error message when remote does not provide it
When dblink or postgres_fdw detects an error on the remote side of the
connection, it will try to construct a local error message as best it
can using libpq's PQresultErrorField(). When no primary message is
available, it was bailing out with an unhelpful "unknown error". Make
that message better and more style guide compliant. Per discussion
on hackers.
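
A sketch of the lookup with a readable fallback (fallback wording
approximate):

    #include <libpq-fe.h>

    /* Extract the remote error's primary message, substituting a readable
     * fallback when the remote side supplied none. */
    static const char *
    remote_primary_message(const PGresult *res)
    {
        const char *msg = PQresultErrorField(res, PG_DIAG_MESSAGE_PRIMARY);

        return (msg != NULL && msg[0] != '\0')
            ? msg
            : "could not obtain message string for remote error";
    }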

Backpatch to 9.2 except postgres_fdw which didn't exist before 9.3.

Discussion: https://postgr.es/m/19872.1482338965%40sss.pgh.pa.us
2016-12-21 15:51:52 -08:00
20c27fdfd3 In contrib/uuid-ossp, #include headers needed for ntohl() and ntohs().
Oversight in commit b8cc8f947.  I just noticed this causes compiler
warnings on FreeBSD, and it really ought to cause warnings elsewhere too:
all references I can find say that <arpa/inet.h> is required for these.
We have a lot of code elsewhere that thinks that both <netinet/in.h>
and <arpa/inet.h> should be included for these functions, so do it that
way here too, even though <arpa/inet.h> ought to be sufficient according
to the references I consulted.
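
The include pattern in question, shown as a tiny standalone program:

    #include <netinet/in.h>     /* some platforms want this for ntohl()/ntohs() */
    #include <arpa/inet.h>      /* ...though POSIX says this one is sufficient */
    #include <stdio.h>
    #include <stdint.h>

    int
    main(void)
    {
        uint32_t host_value = ntohl(UINT32_C(0x01020304));  /* byte swap on little-endian */

        printf("%08x\n", (unsigned) host_value);
        return 0;
    }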

Back-patch to 9.4 where the previous commit landed.
2016-12-17 22:24:44 -05:00
4bc8b7bbdf Make contrib/test_decoding regression tests safe for CZ locale.
A little COLLATE "C" goes a long way.

Pavel Stehule, per suggestion from Craig Ringer

Discussion: <CAFj8pRA8nJZcozgxN=RMSqMmKuHVOkcGAAKPKdFeiMWGDSUDLA@mail.gmail.com>
2016-11-21 20:39:28 -05:00
07172b20f1 Correct logical decoding restore behaviour for subtransactions.
Before initializing iteration over a subtransaction's changes, the last
few changes were not spilled to disk. That's correct if the transaction
didn't spill to disk, but otherwise... This bug can lead to missed or
misordered subtransaction contents when they were spilled to disk.

Move spilling of the remaining in-memory changes to
ReorderBufferIterTXNInit(), where it can easily be applied to the top
transaction and, if present, subtransactions.

Since this code had too many bugs already, noticeably increase test
coverage.

Fixes: #14319
Reported-By: Huan Ruan
Discussion: <20160909012610.20024.58169@wrigleys.postgresql.org>
Backport: 9.4-, where logical decoding was added
2016-10-03 22:14:12 -07:00
da3f71a080 Retry opening new segments in pg_xlogdump --follow
There is a small window between when the server closes out the existing
segment and the new one is created. Put a loop around the open call in
this case to make sure we wait for the new file to actually appear.
2016-09-30 11:22:49 +02:00
a4daf83199 worker_spi: Call pgstat_report_stat.
Without this, statistics changes accumulated by the worker never get
reported to the stats collector, which is bad.

Julien Rouhaud
2016-09-28 12:40:17 -04:00
f16d4a2415 Fix pgbench's calculation of average latency, when -T is not used.
If the test duration was given as a number of transactions (-t or no option),
rather than as a duration (-T), the latency average was always printed as 0.
It has been broken ever since the display of latency average was added,
in 9.4.
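
A hedged sketch of the intended arithmetic (names hypothetical): derive the
average from measured elapsed time, which is nonzero whether -t or -T limited
the run:

    /* Average per-transaction latency in milliseconds, computed from the
     * measured wall-clock time rather than the -T setting. */
    static double
    latency_average_ms(double elapsed_sec, int nclients, long transactions)
    {
        return (transactions > 0)
            ? 1000.0 * elapsed_sec * nclients / transactions
            : 0.0;
    }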

Fabien Coelho

Discussion: <alpine.DEB.2.20.1607131015370.7486@sto>
2016-09-21 13:16:24 +03:00
476945c451 Fix latency calculation when there are \sleep commands in the script.
We can't use txn_scheduled to hold the sleep-until time for \sleep, because
that interferes with calculation of the latency of the transaction as a whole.

Backpatch to 9.4, where this bug was introduced.

Fabien COELHO

Discussion: <alpine.DEB.2.20.1608231622170.7102@lancre>
2016-09-19 22:59:44 +03:00
1336bd9862 pg_buffercache: Allow huge allocations.
Otherwise, users who have configured shared_buffers >= 256GB won't
be able to use this module.  There probably aren't many of those, but
it doesn't hurt anything to fix it so that it works.

Backpatch to 9.4, where MemoryContextAllocHuge was introduced.  The
same problem exists in older branches, but there's no easy way to
fix it there.

KaiGai Kohei
2016-09-15 09:22:52 -04:00
3fa8ec44d1 Fix -e option in contrib/intarray/bench/bench.pl.
As implemented, -e ran an EXPLAIN but then discarded the output, which
certainly seems pointless.  Make it print to stdout instead.  It's been
like that forever, so back-patch to all supported branches.

Daniel Gustafsson, reviewed by Andreas Scherbaum

Patch: <B97BDCB7-A3B3-4734-90B5-EDD586941629@yesql.se>
2016-08-17 15:51:10 -04:00
55a3a6e3c1 Fix typo 2016-08-09 19:08:24 -04:00
254eb04f17 Obstruct shell, SQL, and conninfo injection via database and role names.
Due to simplistic quoting and confusion of database names with conninfo
strings, roles with the CREATEDB or CREATEROLE option could escalate to
superuser privileges when a superuser next ran certain maintenance
commands.  The new coding rule for PQconnectdbParams() calls, documented
at conninfo_array_parse(), is to pass expand_dbname=true and wrap
literal database names in a trivial connection string.  Escape
zero-length values in appendConnStrVal().  Back-patch to 9.1 (all
supported versions).
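
For illustration, a minimal libpq sketch of that coding rule (escaping of the
value is elided here; the real code escapes quotes and backslashes before
wrapping):

    #include <libpq-fe.h>
    #include <stdio.h>

    /* Treat the database name as a literal by wrapping it in a trivial
     * connection string and letting PQconnectdbParams() expand it. */
    static PGconn *
    connect_to_named_database(const char *dbname_literal)
    {
        char        conninfo[1024];
        const char *keywords[] = {"dbname", NULL};
        const char *values[2];

        snprintf(conninfo, sizeof(conninfo), "dbname='%s'", dbname_literal);
        values[0] = conninfo;
        values[1] = NULL;

        return PQconnectdbParams(keywords, values, /* expand_dbname */ 1);
    }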

Nathan Bossart, Michael Paquier, and Noah Misch.  Reviewed by Peter
Eisentraut.  Reported by Nathan Bossart.

Security: CVE-2016-5424
2016-08-08 10:07:51 -04:00
c63588feb7 Don't propagate a null subtransaction snapshot up to parent transaction.
This oversight could cause logical decoding to fail to decode an outer
transaction containing changes, if a subtransaction had an XID but no
actual changes.  Per bug #14279 from Marko Tiikkaja.  Patch by Marko
based on analysis by Andrew Gierth.

Discussion: <20160804191757.1430.39011@wrigleys.postgresql.org>
2016-08-07 13:15:55 -04:00
66f7e40812 Fix assorted fallout from IS [NOT] NULL patch.
Commits 4452000f3 et al established semantics for NullTest.argisrow that
are a bit different from its initial conception: rather than being merely
a cache of whether we've determined the input to have composite type,
the flag now has the further meaning that we should apply field-by-field
testing as per the standard's definition of IS [NOT] NULL.  If argisrow
is false and yet the input has composite type, the construct instead has
the semantics of IS [NOT] DISTINCT FROM NULL.  Update the comments in
primnodes.h to clarify this, and fix ruleutils.c and deparse.c to print
such cases correctly.  In the case of ruleutils.c, this merely results in
cosmetic changes in EXPLAIN output, since the case can't currently arise
in stored rules.  However, it represents a live bug for deparse.c, which
would formerly have sent a remote query that had semantics different
from the local behavior.  (From the user's standpoint, this means that
testing a remote nested-composite column for null-ness could have had
unexpected recursive behavior much like that fixed in 4452000f3.)

In a related but somewhat independent fix, make plancat.c set argisrow
to false in all NullTest expressions constructed to represent "attnotnull"
constructs.  Since attnotnull is actually enforced as a simple null-value
check, this is a more accurate representation of the semantics; we were
previously overpromising what it meant for composite columns, which might
possibly lead to incorrect planner optimizations.  (It seems that what the
SQL spec expects a NOT NULL constraint to mean is an IS NOT NULL test, so
arguably we are violating the spec and should fix attnotnull to do the
other thing.  If we ever do, this part should get reverted.)

Back-patch, same as the previous commit.

Discussion: <10682.1469566308@sss.pgh.pa.us>
2016-07-28 16:09:15 -04:00
8b018cb1f9 Register atexit hook only once in pg_upgrade.
start_postmaster() registered stop_postmaster_atexit as an atexit(3)
callback each time through, although the obvious intention was to do
so only once per program run.  The extra registrations were harmless,
so long as we didn't exceed ATEXIT_MAX, but still it's a bug.
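
The usual once-only pattern, as a standalone sketch:

    #include <stdbool.h>
    #include <stdlib.h>

    static void
    stop_postmaster_atexit(void)
    {
        /* shut the postmaster down here */
    }

    /* Register the atexit(3) callback only on the first call. */
    static void
    start_postmaster_sketch(void)
    {
        static bool exit_hook_registered = false;

        if (!exit_hook_registered)
        {
            atexit(stop_postmaster_atexit);
            exit_hook_registered = true;
        }

        /* ...launch the postmaster... */
    }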

Artur Zakirov, with bikeshedding by Kyotaro Horiguchi and me

Discussion: <d279e817-02b5-caa6-215f-cfb05dce109a@postgrespro.ru>
2016-07-28 11:39:10 -04:00
1156a9db8d Make contrib regression tests safe for Danish locale.
In btree_gin and citext, avoid some not-particularly-interesting
dependencies on the sorting of 'aa'.  In tsearch2, use COLLATE "C" to
remove an uninteresting dependency on locale sort order (and thereby
allow removal of a variant expected-file).

Also, in citext, avoid assuming that lower('I') = 'i'.  This isn't relevant
to Danish but it does fail in Turkish.
2016-07-21 16:52:36 -04:00
97d741650d Use correct symbol for minimum int64 value
The old code used SEQ_MINVALUE to get the smallest int64 value.  This
was done as a convenience to avoid having to deal with INT64_IS_BUSTED,
but that is obsolete now.  Also, it is incorrect because the smallest
int64 value is actually SEQ_MINVALUE-1.  Fix by writing out the constant
the long way, as it is done elsewhere in the code.
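
Written out the long way, the constant looks like this (macro name
illustrative):

    #include <stdint.h>

    /* The naive literal -9223372036854775808 is rejected because its
     * positive part does not fit in a signed 64-bit integer, so spell it
     * as "-(max) - 1". */
    #define MY_INT64_MIN    (-INT64_C(0x7FFFFFFFFFFFFFFF) - 1)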
2016-07-17 09:37:33 -04:00
e63895b6e4 Add missing newline in error message 2016-07-11 13:54:11 +02:00
61b24fef89 Fix handling of multixacts predating pg_upgrade
After pg_upgrade, it is possible that some tuples' Xmax have multixacts
corresponding to the old installation; such multixacts cannot have
running members anymore.  In many code sites we already know not to read
them and clobber them silently, but at least when VACUUM tries to freeze
a multixact or determine whether one needs freezing, there's an attempt
to resolve it to its member transactions by calling GetMultiXactIdMembers,
and if the multixact value is "in the future" with regards to the
current valid multixact range, an error like this is raised:
    ERROR:  MultiXactId 123 has not been created yet -- apparent wraparound
and vacuuming fails.  Per discussion with Andrew Gierth, it is completely
bogus to try to resolve multixacts coming from before a pg_upgrade,
regardless of where they stand with regards to the current valid
multixact range.

It's possible to get out from under this problem by doing SELECT FOR UPDATE
of the problem tuples, but if tables are large, this is slow and
tedious, so a more thorough solution is desirable.

To fix, we realize that multixacts in xmax created in 9.2 and previous
have a specific bit pattern that is never used in 9.3 and later (we
already knew this, per comments and infomask tests sprinkled in various
places, but we weren't leveraging this knowledge appropriately).
Whenever the infomask of the tuple matches that bit pattern, we just
ignore the multixact completely as if Xmax wasn't set; or, in the case
of tuple freezing, we act as if an unwanted value is set and clobber it
without decoding.  This guarantees that no errors will be raised, and
that the values will be progressively removed until all tables are
clean.  Most callers of GetMultiXactIdMembers are patched to recognize
directly that the value is a removable "empty" multixact and avoid
calling GetMultiXactIdMembers altogether.

To avoid changing the signature of GetMultiXactIdMembers() in back
branches, we keep the "allow_old" boolean flag but rename it to
"from_pgupgrade"; if the flag is true, we always return an empty set
instead of looking up the multixact.  (I suppose we could remove the
argument in the master branch, but I chose not to do so in this commit).

This was broken all along, but the error-facing message appeared first
because of commit 8e9a16ab8f and was partially fixed in a25c2b7c4d.
This fix, backpatched all the way back to 9.3, goes approximately in the
same direction as a25c2b7c4d but should cover all cases.

Bug analysis by Andrew Gierth and Álvaro Herrera.

A number of public reports match this bug:
  https://www.postgresql.org/message-id/20140330040029.GY4582@tamriel.snowman.net
  https://www.postgresql.org/message-id/538F3D70.6080902@publicrelay.com
  https://www.postgresql.org/message-id/556439CF.7070109@pscs.co.uk
  https://www.postgresql.org/message-id/SG2PR06MB0760098A111C88E31BD4D96FB3540@SG2PR06MB0760.apcprd06.prod.outlook.com
  https://www.postgresql.org/message-id/20160615203829.5798.4594@wrigleys.postgresql.org
2016-06-24 18:29:28 -04:00
ea2ae830d9 Ensure plan stability in contrib/btree_gist regression test.
Buildfarm member skink failed with symptoms suggesting that an
auto-analyze had happened and changed the plan displayed for a
test query.  Although this is evidently of low probability,
regression tests that sometimes fail are no fun, so add commands
to force a bitmap scan to be chosen.
2016-05-12 20:04:40 -04:00
e1aecebc04 Fix pg_upgrade to not fail when new-cluster TOAST rules differ from old.
This patch essentially reverts commit 4c6780fd17, in favor of a much
simpler solution for the case where the new cluster would choose to create
a TOAST table but the old cluster doesn't have one: just don't create a
TOAST table.

The existing code failed in at least two different ways if the situation
arose: (1) ALTER TABLE RESET didn't grab an exclusive lock, so that the
lock sanity check in create_toast_table failed; (2) pg_upgrade did not
provide a pg_type OID for the new toast table, so that the crosscheck in
TypeCreate failed.  While both these problems were introduced by later
patches, they show that the hack being used to cause TOAST table creation
is overwhelmingly fragile (and untested).  I also note that before the
TypeCreate crosscheck was added, the code would have resulted in assigning
an indeterminate pg_type OID to the toast table, possibly causing a later
OID conflict in that catalog; so that it didn't really work even when
committed.

If we simply don't create a TOAST table, there will only be a problem if
the code tries to store a tuple that's wider than a page, and field
compression isn't sufficient to get it under a page.  Given that the TOAST
creation threshold is intended to be about a quarter of a page, it's very
hard to believe that cross-version differences in the do-we-need-a-toast-
table heuristic could result in an observable problem.  So let's just
follow the old version's conclusion about whether a TOAST table is needed.

(If we ever do change needs_toast_table() so much that this conclusion
doesn't apply, we can devise a solution at that time, and hopefully do
it in a less klugy way than 4c6780fd17 did.)

Back-patch to 9.3, like the previous patch.

Discussion: <8110.1462291671@sss.pgh.pa.us>
2016-05-06 22:05:51 -04:00
a840c14286 Remove unused macros.
CHECK_PAGE_OFFSET_RANGE() has been unused forever.
CHECK_RELATION_BLOCK_RANGE() has been unused in pgstatindex.c ever since
the bt_page_stats() and bt_page_items() functions were moved from pgstattuple
to the pageinspect module. It still exists in pageinspect/btreefuncs.c.

Daniel Gustafsson
2016-05-02 10:09:01 +03:00
00456911f4 Fix core dump in ReorderBufferRestoreChange on alignment-picky platforms.
When re-reading an update involving both an old tuple and a new tuple from
disk, reorderbuffer.c was careless about whether the new tuple is suitably
aligned for direct access --- in general, it isn't.  We'd missed seeing
this in the buildfarm because the contrib/test_decoding tests exercise this
code path only a few times, and by chance all of those cases have old
tuples with length a multiple of 4, which is usually enough to make the
access to the new tuple's t_len safe.  For some still-not-entirely-clear
reason, however, Debian's sparc build gets a bus error, as reported by
Christoph Berg; perhaps it's assuming 8-byte alignment of the pointer?

The lack of previous field reports is probably because you need all of
these conditions to trigger a crash: an alignment-picky platform (not
Intel), a transaction large enough to spill to disk, an update within
that xact that changes a primary-key field and has an odd-length old tuple,
and of course logical decoding tracing the transaction.

Avoid the alignment assumption by using memcpy instead of fetching t_len
directly, and add a test case that exposes the crash on picky platforms.
Back-patch to 9.4 where the bug was introduced.
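
The generic technique, as a standalone sketch:

    #include <stdint.h>
    #include <string.h>

    /* Read a uint32 from a possibly misaligned buffer position.  memcpy is
     * safe on alignment-picky platforms; a direct pointer dereference
     * (v = *(const uint32_t *) p) can fault with a bus error. */
    static uint32_t
    read_uint32_unaligned(const char *p)
    {
        uint32_t    v;

        memcpy(&v, p, sizeof(v));
        return v;
    }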

Discussion: <20160413094117.GC21485@msg.credativ.de>
2016-04-14 19:42:22 -04:00
b190fe95bb Fix phony .PHONY.
A couple makefiles had misspelled the magic .PHONY target as PHONY.
2016-03-19 17:19:37 -04:00
23cb32660c Fix "pgbench -C -M prepared".
This didn't work because when we dropped and re-established a database
connection, we did not bother to reset session-specific state such as
the statements-are-prepared flags.

The st->prepared[] array certainly needs to be flushed, and I cleared a
couple of other fields as well that couldn't possibly retain meaningful
state for a new connection.

In passing, fix some bogus comments and strange field order choices.

Per report from Robins Tharakan.
2016-03-16 23:18:07 -04:00
4f37d09169 Avoid unlikely data-loss scenarios due to rename() without fsync.
Renaming a file using rename(2) is not guaranteed to be durable in the face
of crashes. Use the previously added durable_rename()/durable_link_or_rename()
in various places where we previously just renamed files.

Most of the changed call sites are arguably not critical, but it seems
better to err on the side of too much durability.  The most prominent
known case where the previously missing fsyncs could cause data loss is
crashes at the end of a checkpoint. After the actual checkpoint has been
performed, old WAL files are recycled. When they're filled, their
contents are fdatasynced, but we did not fsync the containing
directory. An OS/hardware crash in an unfortunate moment could then end
up leaving that file with its old name, but new content; WAL replay
would thus not replay it.
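
A POSIX sketch of the idea (the real durable_rename() also fsyncs the file
itself and has a Windows variant):

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Rename, then fsync the containing directory so the new name, not just
     * the file contents, survives an OS or hardware crash. */
    static int
    rename_durably(const char *oldpath, const char *newpath, const char *dirpath)
    {
        int     fd;

        if (rename(oldpath, newpath) != 0)
            return -1;

        fd = open(dirpath, O_RDONLY);
        if (fd < 0)
            return -1;
        if (fsync(fd) != 0)
        {
            close(fd);
            return -1;
        }
        return close(fd);
    }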

Reported-By: Tomas Vondra
Author: Michael Paquier, Tomas Vondra, Andres Freund
Discussion: 56583BDD.9060302@2ndquadrant.com
Backpatch: All supported branches
2016-03-09 18:53:54 -08:00
8457c69fea ltree: Zero padding bytes when allocating memory for externally visible data.
ltree/ltree_gist/ltxtquery's headers store data at MAXALIGN alignment,
requiring some padding bytes. So far we left these uninitialized. Zero
those by using palloc0.
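
The generic equivalent of switching to palloc0(), sketched with the standard
library:

    #include <stdlib.h>

    /* Allocate zeroed, so any alignment padding inside the struct holds
     * defined bytes before the data is written out (e.g. into an index page). */
    static void *
    alloc_zeroed(size_t size)
    {
        return calloc(1, size);     /* analogous to palloc0() in the backend */
    }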

Author: Andres Freund
Reported-By: Andres Freund / valgrind / buildfarm animal skink
Backpatch: 9.1-
2016-03-08 14:59:29 -08:00
3b94b3a496 logical decoding: Fix handling of large old tuples with replica identity full.
When decoding the old version of an UPDATE or DELETE change, and if that
tuple was bigger than MaxHeapTupleSize, we either Assert'ed out, or
failed in more subtle ways in non-assert builds.  Normally individual
tuples aren't bigger than MaxHeapTupleSize, with big datums toasted.
But that's not the case for the old version of a tuple for logical
decoding; the replica identity is logged as one piece. With the default
replica identity, btree limits that to small tuples, but that's not the
case for FULL.

Change the tuple buffer infrastructure to separately allocate over-large
tuples, instead of always going through the slab cache.

This unfortunately requires changing the ReorderBufferTupleBuf
definition, since we need to store the allocated size someplace. To avoid
requiring output plugins to recompile, don't store HeapTupleHeaderData
directly after HeapTupleData, but point to it via t_data; that leaves
room for the allocated size.  As there's no reason for an output plugin
to look at ReorderBufferTupleBuf->t_data.header, remove the field. It
was just a minor convenience having it directly accessible.

Reported-By: Adam Dratwiński
Discussion: CAKg6ypLd7773AOX4DiOGRwQk1TVOQKhNwjYiVjJnpq8Wo+i62Q@mail.gmail.com
2016-03-05 18:02:20 -08:00
a50f50a652 logical decoding: old/newtuple in spooled UPDATE changes was switched around.
Somehow I managed to flip the order of restoring old & new tuples when
de-spooling a change in a large transaction from disk. This happens to
only take effect when a change is spooled to disk which has old/new
versions of the tuple. That only is the case for UPDATEs where he
primary key changed or where replica identity is changed to FULL.

The tests didn't catch this because either spooled updates, or updates
that changed primary keys, were tested; not both at the same time.

Found while adding tests for the following commit.

Backpatch: 9.4, where logical decoding was added
2016-03-05 18:02:20 -08:00
465dd92d98 logical decoding: Tell reorderbuffer about all xids.
Logical decoding's reorderbuffer keeps transactions in an LSN ordered
list for efficiency. To make that efficiently possible, upper-level
xids are forced to be logged before nested subtransaction xids.  That
only works though if these records are all looked at: Unfortunately we
didn't do so for e.g. row level locks, which are otherwise uninteresting
for logical decoding.

This could lead to errors like:
"ERROR: subxact logged without previous toplevel record".

It's not sufficient to just look at row locking records; the xid could
appear first due to a lot of other types of records (which will trigger
the transaction to be marked logged with MarkCurrentTransactionIdLoggedIfAny).
So invent infrastructure to tell reorderbuffer about xids seen, when
they'd otherwise not pass through reorderbuffer.c.

Reported-By: Jarred Ward
Bug: #13844
Discussion: 20160105033249.1087.66040@wrigleys.postgresql.org
Backpatch: 9.4, where logical decoding was added
2016-03-05 18:02:20 -08:00
d750b20101 Force synchronous_commit=on in test_decoding's concurrent_ddl_dml.spec.
Otherwise running installcheck-force on a server with
synchronous_commit=off will result in the tests failing. All the other
tests already do so...

Backpatch: 9.4, where logical decoding was added
2016-03-03 17:22:25 -08:00
bec4d0ffb2 pg_upgrade: suppress creation of delete script
Suppress creation of the pg_upgrade delete script when the new data
directory is inside the old data directory.

Reported-by: IRC

Backpatch-through: 9.3, where delete script tests were added
2016-02-18 18:32:26 -05:00
b7547166fe Fix multiple bugs in contrib/pgstattuple's pgstatindex() function.
Dead or half-dead index leaf pages were incorrectly reported as live, as a
consequence of a code rearrangement I made (during a moment of severe brain
fade, evidently) in commit d287818eb5.

The index metapage was not counted in index_size, causing that result to
not agree with the actual index size on-disk.

Index root pages were not counted in internal_pages, which is inconsistent
compared to the case of a root that's also a leaf (one-page index), where
the root would be counted in leaf_pages.  Aside from that inconsistency,
this could lead to additional transient discrepancies between the reported
page counts and index_size, since it's possible for pgstatindex's scan to
see zero or multiple pages marked as BTP_ROOT, if the root moves due to
a split during the scan.  With these fixes, index_size will always be
exactly one page more than the sum of the displayed page counts.

Also, the index_size result was incorrectly documented as being measured in
pages; it's always been measured in bytes.  (While fixing that, I couldn't
resist doing some small additional wordsmithing on the pgstattuple docs.)

Including the metapage causes the reported index_size to not be zero for
an empty index.  To preserve the desired property that the pgstattuple
regression test results are platform-independent (ie, BLCKSZ configuration
independent), scale the index_size result in the regression tests.

The documentation issue was reported by Otsuka Kenji, and the inconsistent
root page counting by Peter Geoghegan; the other problems noted by me.
Back-patch to all supported branches, because this has been broken for
a long time.
2016-02-18 15:40:35 -05:00
ccbb01ff4a pgbench: avoid FD_ISSET on an invalid file descriptor
The original code wasn't careful to test the file descriptor returned by
PQsocket() for an invalid socket.  If an invalid socket did turn up,
that would amount to calling FD_ISSET with fd = -1, whereby undefined
behavior can be invoked.

To fix, test the file descriptor for validity and stop further processing if
that fails.

Problem noticed by Coverity.

There is an existing FD_ISSET callsite that does check for invalid
sockets beforehand, but the error message reported by it was
strerror(errno); in testing the aforementioned change, that turns out to
result in "bad socket: Success" which isn't terribly helpful.  Instead
use PQerrorMessage() in both places, which is more likely to contain a
useful error message.
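
A libpq sketch of the guard (error wording illustrative):

    #include <stdio.h>
    #include <sys/select.h>
    #include <libpq-fe.h>

    /* Validate PQsocket() before handing the descriptor to FD_SET, and
     * report libpq's message rather than a meaningless strerror(errno). */
    static int
    wait_for_connection_input(PGconn *conn)
    {
        int     sock = PQsocket(conn);
        fd_set  input_mask;

        if (sock < 0)
        {
            fprintf(stderr, "invalid socket: %s", PQerrorMessage(conn));
            return -1;
        }

        FD_ZERO(&input_mask);
        FD_SET(sock, &input_mask);
        return select(sock + 1, &input_mask, NULL, NULL, NULL);
    }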

Backpatch-through: 9.1.
2016-02-15 20:33:43 -03:00
2099b911d7 postgres_fdw: Avoid possible misbehavior when RETURNING tableoid column only.
deparseReturningList ended up adding RETURNING NULL to the code, but
code elsewhere saw an empty list of attributes and concluded that it
should not expect tuples from the remote side.

Etsuro Fujita and Robert Haas, reviewed by Thom Brown
2016-02-04 22:27:47 -05:00
c33d1a8d52 pgbench: Install guard against overflow when dividing by -1.
Commit 64f5edca24 fixed the same hazard
on master; this is a backport, but the modulo operator does not exist
in older releases.
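
The guard, as a standalone sketch:

    #include <stdbool.h>
    #include <stdint.h>

    /* 64-bit division with the two hazardous cases handled explicitly;
     * INT64_MIN / -1 overflows (and traps on some platforms) if unchecked. */
    static bool
    div_int64_checked(int64_t lval, int64_t rval, int64_t *result)
    {
        if (rval == 0)
            return false;               /* division by zero */
        if (rval == -1)
        {
            if (lval == INT64_MIN)
                return false;           /* would overflow */
            *result = -lval;
            return true;
        }
        *result = lval / rval;
        return true;
    }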

Michael Paquier
2016-02-03 09:15:29 -05:00
aa223a037b Fix IsValidJsonNumber() to notice trailing non-alphanumeric garbage.
Commit e09996ff8d was one brick shy of a load: it didn't insist
that the detected JSON number be the whole of the supplied string.
This allowed inputs such as "2016-01-01" to be misdetected as valid JSON
numbers.  Per bug #13906 from Dmitry Ryabov.

In passing, be more wary of zero-length input (I'm not sure this can
happen given current callers, but better safe than sorry), and do some
minor cosmetic cleanup.
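
A generic analogue of the check (not the backend's IsValidJsonNumber, which
follows JSON's grammar rather than strtod):

    #include <stdbool.h>
    #include <stdlib.h>

    /* A numeric parse only counts if it consumed the whole string, so
     * trailing garbage such as the "-01-01" in "2016-01-01" is rejected;
     * zero-length input is rejected up front as well. */
    static bool
    is_whole_string_number(const char *s)
    {
        char   *end;

        if (s == NULL || s[0] == '\0')
            return false;
        (void) strtod(s, &end);
        return *end == '\0';
    }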
2016-02-03 01:39:08 -05:00
fc5d5e9de7 Fix spelling mistake.
Same patch submitted independently by David Rowley and Peter Geoghegan.
2016-01-14 23:15:27 -05:00
8c558b2e96 Sort $(wildcard) output where needed for reproducible build output.
The order of inclusion of .o files makes a difference in linker output;
not a functional difference, but still a bitwise difference, which annoys
some packagers who would like reproducible builds.

Report and patch by Christoph Berg
2016-01-05 15:47:05 -05:00
0a29cf693d Add forgotten CHECK_FOR_INTERRUPT calls in pgcrypto's crypt()
Both Blowfish and DES implementations of crypt() can take an arbitrarily
long time, depending on the number of rounds specified by the caller;
make sure they can be interrupted.

Author: Andreas Karlsson
Reviewer: Jeff Janes

Backpatch to 9.1.
2015-12-27 13:03:19 -03:00
f91c4e326a pg_upgrade: fix CopyFile() on Windows to fail on file existence
Also fix getErrorText() to return the right error string on failure.
This behavior now matches that of other operating systems.
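
A Win32 sketch of the corrected call (wrapper name illustrative):

    #include <windows.h>
    #include <stdio.h>

    /* bFailIfExists = TRUE makes CopyFile() fail rather than silently
     * overwrite an existing destination; report the real Windows error. */
    static int
    copy_file_no_clobber(const char *src, const char *dst)
    {
        if (!CopyFileA(src, dst, TRUE))
        {
            fprintf(stderr, "could not copy \"%s\" to \"%s\": error code %lu\n",
                    src, dst, (unsigned long) GetLastError());
            return -1;
        }
        return 0;
    }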

Report by Noah Misch

Backpatch through 9.1
2015-11-24 17:18:28 -05:00
87cdfeb18a pg_upgrade: properly detect file copy failure on Windows
Previously, file copy failures were ignored on Windows due to an
incorrect return value check.

Report by Manu Joye

Backpatch through 9.1
2015-11-14 11:47:11 -05:00
bed3f6d035 Prevent stack overflow in query-type functions.
The tsquery, ltxtquery and query_int data types have a common ancestor.
Having acquired check_stack_depth() calls independently, each was
missing at least one call.  Back-patch to 9.0 (all supported versions).
2015-10-05 10:06:34 -04:00
4d95419e8a pgcrypto: Detect and report too-short crypt() salts.
Certain short salts crashed the backend or disclosed a few bytes of
backend memory.  For existing salt-induced error conditions, emit a
message saying as much.  Back-patch to 9.0 (all supported versions).

Josh Kupershmidt

Security: CVE-2015-5288
2015-10-05 10:06:34 -04:00
93840f96c7 Improve contrib/pg_stat_statements' handling of garbage collection failure.
If we can't read the query texts file (whether because of out-of-memory, or
for some other reason), give up and reset the file to empty, discarding all
stored query texts, though not the statistics per se.  We used to leave
things alone and hope for better luck next time, but the problem is that
the file is only going to get bigger and even harder to slurp into memory.
Better to do something that will get us out of trouble.

Likewise reset the file to empty for any other failure within gc_qtexts().
The previous behavior after a write error was to discard query texts but
not do anything to truncate the file, which is just weird.

Also, increase the maximum supported file size from MaxAllocSize to
MaxAllocHugeSize; this makes it more likely we'll be able to do a garbage
collection successfully.

Also, fix recalculation of mean_query_len within entry_dealloc() to match
the calculation in gc_qtexts().  The previous coding overlooked the
possibility of dropped texts (query_len == -1) and would underestimate the
mean of the remaining entries in such cases, thus possibly causing excess
garbage collection cycles.

In passing, add some errdetail to the log entry that complains about
insufficient memory to read the query texts file, which after all was
Jim Nasby's original complaint.

Back-patch to 9.4 where the current handling of query texts was
introduced.

Peter Geoghegan, rather editorialized upon by me
2015-10-04 17:58:30 -04:00
99557984bc Improve errhint() about replication slot naming restrictions.
The existing hint talked about "may only contain letters", but the
actual requirement is more strict: only lower case letters are allowed.

Reported-By: Rushabh Lathia
Author: Rushabh Lathia
Discussion: AGPqQf2x50qcwbYOBKzb4x75sO_V3g81ZsA8+Ji9iN5t_khFhQ@mail.gmail.com
Backpatch: 9.4-, where replication slots were added
2015-10-03 15:29:28 +02:00
0da864c53c Improve handling of collations in contrib/postgres_fdw.
If we have a local Var of say varchar type with default collation, and
we apply a RelabelType to convert that to text with default collation, we
don't want to consider that as creating an FDW_COLLATE_UNSAFE situation.
It should be okay to compare that to a remote Var, so long as the remote
Var determines the comparison collation.  (When we actually ship such an
expression to the remote side, the local Var would become a Param with
default collation, meaning the remote Var would in fact control the
comparison collation, because non-default implicit collation overrides
default implicit collation in parse_collate.c.)  To fix, be more precise
about what FDW_COLLATE_NONE means: it applies either to a noncollatable
data type or to a collatable type with default collation, if that collation
can't be traced to a remote Var.  (When it can, FDW_COLLATE_SAFE is
appropriate.)  We were essentially using that interpretation already at
the Var/Const/Param level, but we weren't bubbling it up properly.

An alternative fix would be to introduce a separate FDW_COLLATE_DEFAULT
value to describe the second situation, but that would add more code
without changing the actual behavior, so it didn't seem worthwhile.

Also, since we're clarifying the rule to be that we care about whether
operator/function input collations match, there seems no need to fail
immediately upon seeing a Const/Param/non-foreign-Var with nondefault
collation.  We only have to reject if it appears in a collation-sensitive
context (for example, "var IS NOT NULL" is perfectly safe from a collation
standpoint, whatever collation the var has).  So just set the state to
UNSAFE rather than failing immediately.

Per report from Jeevan Chalke.  This essentially corrects some sloppy
thinking in commit ed3ddf918b, so back-patch
to 9.3 where that logic appeared.
2015-09-24 12:47:30 -04:00