mirror of https://github.com/postgres/postgres.git synced 2025-05-28 05:21:27 +03:00

35059 Commits

Author SHA1 Message Date
Tom Lane
528db6ab96 Improve documentation about CREATE INDEX CONCURRENTLY.
Clarify the description of which transactions will block a CREATE INDEX
CONCURRENTLY command from proceeding, and mention that the index might
still not be usable after CREATE INDEX completes.  (This happens if the
index build detected broken HOT chains, so that pg_index.indcheckxmin gets
set, and there are open old transactions preventing the xmin horizon from
advancing past the index's initial creation.  I didn't want to explain what
broken HOT chains are, though, so I omitted an explanation of exactly when
old transactions prevent the index from being used.)

Per discussion with Chris Travers.  Back-patch to all supported branches,
since the same text appears in all of them.
2016-02-16 13:43:03 -05:00
Tatsuo Ishii
779c46a39f Improve wording in the planner doc
Change "In this case" to "In the example above" to clarify what it
actually refers to.
2016-02-16 15:38:12 +09:00
Fujii Masao
8a96651caa Correct the formulas for System V IPC parameters SEMMNI and SEMMNS in docs.
In runtime.sgml, the old formulas for calculating the reasonable
values of SEMMNI and SEMMNS were incorrect: they failed to count the
semaphores needed by the checkpointer process (introduced in 9.2) and by
the background worker processes (introduced in 9.3).

This commit fixes those formulas so that they count the number of
semaphores which the checkpointer process and the background worker
processes need.

Report and patch by Kyotaro Horiguchi. Only the patch for 9.3 was
modified by me. Back-patch to 9.2 where the checkpointer process was
added and the number of needed semaphores was increased.

Author: Kyotaro Horiguchi
Reviewed-by: Fujii Masao
Backpatch: 9.2
Discussion: http://www.postgresql.org/message-id/20160203.125119.66820697.horiguchi.kyotaro@lab.ntt.co.jp
2016-02-16 15:05:30 +09:00
Alvaro Herrera
daac53779a pgbench: avoid FD_ISSET on an invalid file descriptor
The original code wasn't careful to test the file descriptor returned by
PQsocket() for an invalid socket.  If an invalid socket did turn up,
that would amount to calling FD_ISSET with fd = -1, which invokes undefined
behavior.

To fix, test the file descriptor for validity and stop further processing if
that check fails.

Problem noticed by Coverity.

There is an existing FD_ISSET callsite that does check for invalid
sockets beforehand, but the error message reported by it was
strerror(errno); in testing the aforementioned change, that turned out to
produce "bad socket: Success", which isn't terribly helpful.  Instead, use
PQerrorMessage() in both places, which is more likely to contain a useful
error message.

Backpatch-through: 9.1.
2016-02-15 20:33:43 -03:00
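A minimal sketch of the pattern the commit above describes, using libpq's
PQsocket() and PQerrorMessage() but a hypothetical helper name (this is not
pgbench's actual code):

    #include <stdio.h>
    #include <sys/select.h>
    #include <libpq-fe.h>

    /* Wait for data on a connection's socket, guarding against an invalid fd. */
    static int
    wait_for_input(PGconn *conn)
    {
        fd_set  input_mask;
        int     sock = PQsocket(conn);

        if (sock < 0)
        {
            /* report the libpq message rather than strerror(errno) */
            fprintf(stderr, "invalid socket: %s", PQerrorMessage(conn));
            return -1;
        }

        FD_ZERO(&input_mask);
        FD_SET(sock, &input_mask);      /* safe: sock is known valid here */
        return select(sock + 1, &input_mask, NULL, NULL, NULL);
    }
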
Tom Lane
e781baa640 Suppress compiler warnings about useless comparison of unsigned to zero.
Reportedly, some compilers warn about tests like "c < 0" if c is unsigned,
and hence complain about the character range checks I added in commit
3bb3f42f3749d40b8d4de65871e8d828b18d4a45.  This is a bit of a pain since
the regex library doesn't really want to assume that chr is unsigned.
However, since any such reconfiguration would involve manual edits of
regcustom.h anyway, we can put it on the shoulders of whoever wants to
do that to adjust this new range-checking macro correctly.

Per gripes from Coverity and Andres.
2016-02-15 17:11:52 -05:00
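An illustrative sketch of the kind of adjustment described above; the macro
name and exact form here are hypothetical, not the regcustom.h code:

    typedef unsigned int chr;       /* assumption: chr configured as unsigned */

    #define CHR_MAX 0x7ffffffe

    /* With a signed chr this would read ((c) >= 0 && (c) <= CHR_MAX); when chr
     * is unsigned the low-bound test is vacuously true and only draws compiler
     * warnings, so check the upper bound alone. */
    #define CHR_IN_RANGE(c)  ((c) <= CHR_MAX)
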
Noah Misch
4421b5253f Accept pg_ctl timeout from the PGCTLTIMEOUT environment variable.
Many automated test suites call pg_ctl.  Buildfarm members axolotl,
hornet, mandrill, shearwater, sungazer and tern have failed when server
shutdown took longer than the pg_ctl default 60s timeout.  This addition
permits slow hosts to easily raise the timeout without us editing a
--timeout argument into every test suite pg_ctl call.  Back-patch to 9.1
(all supported versions) for the sake of automated testing.

Reviewed by Tom Lane.
2016-02-10 20:34:41 -05:00
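A minimal sketch of the lookup this enables; the environment variable name is
from the commit above, while the helper and the default constant are
illustrative:

    #include <stdlib.h>

    #define DEFAULT_WAIT 60         /* pg_ctl's default timeout, in seconds */

    static int
    get_wait_seconds(void)
    {
        const char *env_timeout = getenv("PGCTLTIMEOUT");

        /* a positive PGCTLTIMEOUT overrides the built-in default; an explicit
         * --timeout argument can still override both */
        if (env_timeout != NULL)
        {
            int     val = atoi(env_timeout);

            if (val > 0)
                return val;
        }
        return DEFAULT_WAIT;
    }
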
Tom Lane
64f99a2ee3 Avoid use of sscanf() to parse ispell dictionary files.
It turns out that on FreeBSD-derived platforms (including OS X), the
*scanf() family of functions is pretty much brain-dead about multibyte
characters.  In particular it will apply isspace() to individual bytes
of input even when those bytes are part of a multibyte character, thus
allowing false recognition of a field-terminating space.

We appear to have little alternative other than instituting a coding
rule that *scanf() is not to be used if the input string might contain
multibyte characters.  (There was some discussion of relying on "%ls",
but that probably just moves the portability problem somewhere else,
and besides it doesn't fully prevent BSD *scanf() from using isspace().)

This patch is a down payment on that: it gets rid of use of sscanf()
to parse ispell dictionary files, which are certainly at great risk
of having a problem.  The code is cleaner this way anyway, though
a bit longer.

In passing, improve a few comments.

Report and patch by Artur Zakirov, reviewed and somewhat tweaked by me.
Back-patch to all supported branches.
2016-02-10 19:30:12 -05:00
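An illustrative sketch of the hand-rolled parsing that avoids the hazard
(hypothetical helper, not the ispell code itself): a *scanf() %s conversion
may stop mid-character on BSD-derived libcs, whereas splitting only on ASCII
whitespace bytes cannot.

    /* Return the next whitespace-separated field from *line, advancing *line.
     * Only ASCII space/tab/newline bytes terminate a field, so bytes that are
     * part of a multibyte character are never mistaken for separators. */
    static char *
    next_field(char **line)
    {
        char   *p = *line;
        char   *start;

        while (*p == ' ' || *p == '\t' || *p == '\n' || *p == '\r')
            p++;                    /* skip leading whitespace */
        start = p;
        while (*p != '\0' && *p != ' ' && *p != '\t' && *p != '\n' && *p != '\r')
            p++;
        if (*p != '\0')
            *p++ = '\0';
        *line = p;
        return start;
    }
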
Tom Lane
a060605801 Stamp 9.2.15. REL9_2_15 2016-02-08 16:19:37 -05:00
Peter Eisentraut
6a10dd0800 Translation updates
Source-Git-URL: git://git.postgresql.org/git/pgtranslation/messages.git
Source-Git-Hash: e640b67db0ef2d766eb6b5ec60dc3ba5ed4e2ede
2016-02-08 14:42:59 -05:00
Tom Lane
9dafc4130d Last-minute updates for release notes.
Security: CVE-2016-0773
2016-02-08 10:49:38 -05:00
Tom Lane
e93516cf7f Fix some regex issues with out-of-range characters and large char ranges.
Previously, our regex code defined CHR_MAX as 0xfffffffe, which is a
bad choice because it is outside the range of type "celt" (int32).
Characters approaching that limit could lead to infinite loops in logic
such as "for (c = a; c <= b; c++)" where c is of type celt but the
range bounds are chr.  Such loops will work safely only if CHR_MAX+1
is representable in celt, since c must advance to beyond b before the
loop will exit.

Fortunately, there seems no reason not to restrict CHR_MAX to 0x7ffffffe.
It's highly unlikely that Unicode will ever assign codes that high, and
none of our other backend encodings need characters beyond that either.

In addition to modifying the macro, we have to explicitly enforce character
range restrictions on the values of \u, \U, and \x escape sequences, else
the limit is trivially bypassed.

Also, the code for expanding case-independent character ranges in bracket
expressions had a potential integer overflow in its calculation of the
number of characters it could generate, which could lead to allocating too
small a character vector and then overwriting memory.  An attacker with the
ability to supply arbitrary regex patterns could easily cause transient DOS
via server crashes, and the possibility for privilege escalation has not
been ruled out.

Quite aside from the integer-overflow problem, the range expansion code was
unnecessarily inefficient in that it always produced a result consisting of
individual characters, abandoning the knowledge that we had a range to
start with.  If the input range is large, this requires excessive memory.
Change it so that the original range is reported as-is, and then we add on
any case-equivalent characters that are outside that range.  With this
approach, we can bound the number of individual characters allowed without
sacrificing much.  This patch allows at most 100000 individual characters,
which I believe to be more than the number of case pairs existing in
Unicode, so that the restriction will never be hit in practice.

It's still possible for range() to take a while given a large character code
range, so also add statement-cancel detection to its loop.  The downstream
function dovec() also lacked cancel detection, and could take a long time
given a large output from range().

Per fuzz testing by Greg Stark.  Back-patch to all supported branches.

Security: CVE-2016-0773
2016-02-08 10:25:40 -05:00
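A sketch of the loop hazard the commit above describes, with the types
abbreviated and a 32-bit signed celt assumed, as in the regex code: with the
old CHR_MAX, b + 1 was not representable in celt, so the loop variable could
never advance past b.

    typedef int celt;               /* signed 32-bit element type */
    typedef unsigned int chr;

    #define CHR_MAX 0x7ffffffe      /* new limit: CHR_MAX + 1 still fits in celt */

    static void
    walk_range(chr a, chr b)        /* caller guarantees a, b <= CHR_MAX */
    {
        celt    c;

        /* Terminates only because c can legitimately reach b + 1: with
         * b <= 0x7ffffffe, b + 1 is at most INT_MAX and representable in celt. */
        for (c = (celt) a; c <= (celt) b; c++)
        {
            /* ... process character c ... */
        }
    }
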
Tom Lane
ddcc256caf Improve documentation about PRIMARY KEY constraints.
Get rid of the false implication that PRIMARY KEY is exactly equivalent to
UNIQUE + NOT NULL.  That was more-or-less true at one time in our
implementation, but the standard doesn't say that, and we've grown various
features (many of them required by spec) that treat a pkey differently from
less-formal constraints.  Per recent discussion on pgsql-general.

I failed to resist the temptation to do some other wordsmithing in the
same area.
2016-02-07 16:02:44 -05:00
Tom Lane
3d6c988890 Release notes for 9.5.1, 9.4.6, 9.3.11, 9.2.15, 9.1.20. 2016-02-07 14:16:32 -05:00
Noah Misch
de9766d39d Force certain "pljava" custom GUCs to be PGC_SUSET.
Future PL/Java versions will close CVE-2016-0766 by making these GUCs
PGC_SUSET.  This PostgreSQL change independently mitigates that PL/Java
vulnerability, helping sites that update PostgreSQL more frequently than
PL/Java.  Back-patch to 9.1 (all supported versions).
2016-02-05 20:23:14 -05:00
Tom Lane
32f17a2e7d Update time zone data files to tzdata release 2016a.
DST law changes in Cayman Islands, Metlakatla, Trans-Baikal Territory
(Zabaykalsky Krai).  Historical corrections for Pakistan.
2016-02-05 10:59:35 -05:00
Tom Lane
4f58a70039 In pg_dump, ensure that view triggers are processed after view rules.
If a view is split into CREATE TABLE + CREATE RULE to break a circular
dependency, then any triggers on the view must be dumped/reloaded after
the CREATE RULE; else the backend may reject the CREATE TRIGGER because
it's the wrong type of trigger for a plain table.  This works all right
in plain dump/restore because of pg_dump's sorting heuristic that places
triggers after rules.  However, when using parallel restore, the ordering
must be enforced by a dependency --- and we didn't have one.

Fixing this is a mere matter of adding an addObjectDependency() call,
except that we need to be able to find all the triggers belonging to the
view relation, and there was no easy way to do that.  Add fields to
pg_dump's TableInfo struct to remember where the associated TriggerInfo
struct(s) are.

Per bug report from Dennis Kögel.  The failure can be exhibited at least
as far back as 9.1, so back-patch to all supported branches.
2016-02-04 00:26:10 -05:00
Robert Haas
b63a4f418f pgbench: Install guard against overflow when dividing by -1.
Commit 64f5edca2401f6c2f23564da9dd52e92d08b3a20 fixed the same hazard
on master; this is a backport, but the modulo operator does not exist
in older releases.

Michael Paquier
2016-02-03 09:21:44 -05:00
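A minimal sketch of the guard being backported (hypothetical helper; pgbench's
real code reports an error rather than returning a status): dividing the most
negative 64-bit integer by -1 overflows, and on many platforms traps.

    #include <stdbool.h>
    #include <stdint.h>

    /* Compute lval / rval into *result; return false on division by zero
     * or on the one overflowing case, INT64_MIN / -1. */
    static bool
    safe_div(int64_t lval, int64_t rval, int64_t *result)
    {
        if (rval == 0)
            return false;
        if (rval == -1)
        {
            if (lval == INT64_MIN)
                return false;       /* -INT64_MIN is not representable */
            *result = -lval;        /* dividing by -1 is just negation */
            return true;
        }
        *result = lval / rval;
        return true;
    }
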
Michael Meskes
d9ce5d201d Make sure ecpg header files do not have a comment spanning several lines, one of
which is a preprocessor directive. Such a comment leads ecpg to incorrectly parse it as nested.
2016-02-01 13:19:34 +01:00
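A hypothetical illustration of the construct being removed (not the actual
header text): a comment that runs across several lines, one of which begins
like a preprocessor directive.  A C compiler treats that line as plain comment
text, but ecpg could misparse the comment as nested.

    /* problematic shape: the next line is inside this comment,
    #define LOOKS_LIKE_A_DIRECTIVE 1
       yet it starts with '#'. */

    /* safer shape: keep preprocessor directives outside multi-line comments */
    #define REALLY_A_DIRECTIVE 1
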
Andrew Dunstan
555bc8b374 Fix error in documented use of mingw-w64 compilers
Error reported by Igal Sapir.
2016-01-30 19:31:49 -05:00
Tom Lane
a362cc2e35 Fix incorrect pattern-match processing in psql's \det command.
listForeignTables' invocation of processSQLNamePattern did not match up
with the other ones that handle potentially-schema-qualified names; it
failed to make use of pg_table_is_visible() and also passed the name
arguments in the wrong order.  Bug seems to have been aboriginal in commit
0d692a0dc9f0e532.  It accidentally sort of worked as long as you didn't
inquire too closely into the behavior, although the silliness was later
exposed by inconsistencies in the test queries added by 59efda3e50ca4de6
(which I probably should have questioned at the time, but didn't).

Per bug #13899 from Reece Hart.  Patch by Reece Hart and Tom Lane.
Back-patch to all affected branches.
2016-01-29 10:28:03 +01:00
Tom Lane
3a7af9d73b Fix startup so that log prefix %h works for the log_connections message.
We entirely randomly chose to initialize port->remote_host just after
printing the log_connections message, when we could perfectly well do it
just before, allowing %h and %r to work for that message.  Per gripe from
Artem Tomyuk.
2016-01-26 15:38:33 -05:00
Bruce Momjian
49d65e857c Properly install dynloader.h on MSVC builds
This will enable PL/Java to be cleanly compiled, as dynloader.h is a
requirement.

Report by Chapman Flack

Patch by Michael Paquier

Backpatch through 9.1
2016-01-19 23:30:28 -05:00
Robert Haas
f744f395a5 Fix spelling mistake.
Same patch submitted independently by David Rowley and Peter Geoghegan.
2016-01-14 23:16:26 -05:00
Magnus Hagander
df0bd5a0f7 Properly close token in sspi authentication
We can never leak more than one token, but we shouldn't do that. We
don't bother closing it in the error paths since the process will
exit shortly anyway.

Christian Ullrich
2016-01-14 13:07:55 +01:00
Tom Lane
be2b276514 Handle extension members when first setting object dump flags in pg_dump.
pg_dump's original approach to handling extension member objects was to
run around and clear (or set) their dump flags rather late in its data
collection process.  Unfortunately, quite a lot of code expects those flags
to be valid before that; which was an entirely reasonable expectation
before we added extensions.  In particular, this explains Karsten Hilbert's
recent report of pg_upgrade failing on a database in which an extension
has been installed into the pg_catalog schema.  Its objects are initially
marked as not-to-be-dumped on the strength of their schema, and later we
change them to must-dump because we're doing a binary upgrade of their
extension; but we've already skipped essential tasks like making associated
DO_SHELL_TYPE objects.

To fix, collect extension membership data first, and incorporate it in the
initial setting of the dump flags, so that those are once again correct
from the get-go.  This has the undesirable side effect of slightly
lengthening the time taken before pg_dump acquires table locks, but testing
suggests that the increase in that window is not very much.

Along the way, get rid of ugly special-case logic for deciding whether
to dump procedural languages, FDWs, and foreign servers; dump decisions
for those are now correct up-front, too.

In 9.3 and up, this also fixes erroneous logic about when to dump event
triggers (basically, they were *always* dumped before).  In 9.5 and up,
transform objects had that problem too.

Since this problem came in with extensions, back-patch to all supported
versions.
2016-01-13 18:55:27 -05:00
Tom Lane
3843ba510b Avoid dump/reload problems when using both plpython2 and plpython3.
Commit 803716013dc1350f installed a safeguard against loading plpython2
and plpython3 at the same time, but asserted that both could still be
used in the same database, just not in the same session.  However, that's
not actually all that practical because dumping and reloading will fail
(since both libraries necessarily get loaded into the restoring session).
pg_upgrade is even worse, because it checks for missing libraries by
loading every .so library mentioned in the entire installation into one
session, so that you can have only one across the whole cluster.

We can improve matters by not throwing the error immediately in _PG_init,
but only when and if we're asked to do something that requires calling
into libpython.  This ameliorates both of the above situations, since
while execution of CREATE LANGUAGE, CREATE FUNCTION, etc will result in
loading plpython, it isn't asked to do anything interesting (at least
not if check_function_bodies is off, as it will be during a restore).

It's possible that this opens some corner-case holes in which a crash
could be provoked with sufficient effort.  However, since plpython
only exists as an untrusted language, any such crash would require
superuser privileges, making it "don't do that" not a security issue.
To reduce the hazards in this area, the error is still FATAL when it
does get thrown.

Per a report from Paul Jones.  Back-patch to 9.2, which is as far back
as the patch applies without work.  (It could be made to work in 9.1,
but given the lack of previous complaints, I'm disinclined to expend
effort so far back.  We've been pretty desultory about support for
Python 3 in 9.1 anyway.)
2016-01-11 19:55:40 -05:00
Tom Lane
0e7d084aed Clean up some lack-of-STRICT issues in the core code, too.
A scan for missed proisstrict markings in the core code turned up
these functions:

brin_summarize_new_values
pg_stat_reset_single_table_counters
pg_stat_reset_single_function_counters
pg_create_logical_replication_slot
pg_create_physical_replication_slot
pg_drop_replication_slot

The first three of these take OID, so a null argument will normally look
like a zero to them, resulting in "ERROR: could not open relation with OID
0" for brin_summarize_new_values, and no action for the pg_stat_reset_XXX
functions.  The other three will dump core on a null argument, though this
is mitigated by the fact that they won't do so until after checking that
the caller is superuser or has rolreplication privilege.

In addition, the pg_logical_slot_get/peek[_binary]_changes family was
intentionally marked nonstrict, but failed to make nullness checks on all
the arguments; so again a null-pointer-dereference crash is possible but
only for superusers and rolreplication users.

Add the missing ARGISNULL checks to the latter functions, and mark the
former functions as strict in pg_proc.  Make that change in the back
branches too, even though we can't force initdb there, just so that
installations initdb'd in future won't have the issue.  Since none of these
bugs rise to the level of security issues (and indeed the pg_stat_reset_XXX
functions hardly misbehave at all), it seems sufficient to do this.

In addition, fix some order-of-operations oddities in the slot_get_changes
family, mostly cosmetic, but not the part that moves the function's last
few operations into the PG_TRY block.  As it stood, there was significant
risk for an error to exit without clearing historical information from
the system caches.

The slot_get_changes bugs go back to 9.4 where that code was introduced.
Back-patch appropriate subsets of the pg_proc changes into all active
branches, as well.
2016-01-09 16:58:33 -05:00
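A minimal sketch of the ARGISNULL pattern for a function that is intentionally
non-strict (hypothetical function name, not one of those listed above):

    #include "postgres.h"
    #include "fmgr.h"

    PG_FUNCTION_INFO_V1(demo_nonstrict);

    Datum
    demo_nonstrict(PG_FUNCTION_ARGS)
    {
        /* A non-strict function must check each argument it dereferences. */
        if (PG_ARGISNULL(0))
            ereport(ERROR,
                    (errcode(ERRCODE_NULL_VALUE_NOT_ALLOWED),
                     errmsg("first argument must not be null")));

        /* ... safe to fetch and use the argument from here on ... */
        PG_RETURN_NULL();
    }
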
Tom Lane
1eb2c4bfd1 Clean up code for widget_in() and widget_out().
Given syntactically wrong input, widget_in() could call atof() with an
indeterminate pointer argument, typically leading to a crash; or if it
didn't do that, it might return a NULL pointer, which again would lead
to a crash since old-style C functions aren't supposed to do things
that way.  Fix that by correcting the off-by-one syntax test and
throwing a proper error rather than just returning NULL.

Also, since widget_in and widget_out have been marked STRICT for a
long time, their tests for null inputs are just dead code; remove 'em.
In the oldest branches, also improve widget_out to use snprintf not
sprintf, just to be sure.

In passing, get rid of a long-since-useless sprintf into a local buffer
that nothing further is done with, and make some other minor coding
style cleanups.

In the intended regression-testing usage of these functions, none of
this is very significant; but if the regression test database were
left around in a production installation, these bugs could amount
to a minor security hazard.

Piotr Stefaniak, Michael Paquier, and Tom Lane
2016-01-09 13:44:27 -05:00
Tom Lane
55caeeee0f Add STRICT to some C functions created by the regression tests.
These functions readily crash when passed a NULL input value.  The tests
themselves do not pass NULL values to them; but when the regression
database is used as a basis for fuzz testing, they cause a lot of noise.
Also, if someone were to leave a regression database lying about in a
production installation, these would create a minor security hazard.

Andreas Seltenreich
2016-01-09 13:03:23 -05:00
Tom Lane
957e1117da Fix unobvious interaction between -X switch and subdirectory creation.
Turns out the only reason initdb -X worked is that pg_mkdir_p won't
whine if you point it at something that's a symlink to a directory.
Otherwise, the attempt to create pg_xlog/ just like all the other
subdirectories would have failed.  Let's be a little more explicit
about what's happening.  Oversight in my patch for bug #13853
(mea culpa for not testing -X ...)
2016-01-07 18:20:58 -05:00
Tom Lane
d6d6400cc0 Use plain mkdir() not pg_mkdir_p() to create subdirectories of PGDATA.
When we're creating subdirectories of PGDATA during initdb, we know darn
well that the parent directory exists (or should exist) and that the new
subdirectory doesn't (or shouldn't).  There is therefore no need to use
anything more complicated than mkdir().  Using pg_mkdir_p() just opens us
up to unexpected failure modes, such as the one exhibited in bug #13853
from Nuri Boardman.  It's not very clear why pg_mkdir_p() went wrong there,
but it is clear that we didn't need to be trying to create parent
directories in the first place.  We're not even saving any code, as proven
by the fact that this patch nets out at minus five lines.

Since this is a response to a field bug report, back-patch to all branches.
2016-01-07 15:22:01 -05:00
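A minimal sketch of the simpler pattern (illustrative helper; initdb's own
error reporting differs): since PGDATA is known to exist, each subdirectory
can be created with a plain mkdir() and any failure reported outright.

    #include <errno.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/types.h>
    #include <sys/stat.h>

    static void
    make_subdirectory(const char *path)
    {
        /* parent is known to exist, so no pg_mkdir_p()-style recursion needed */
        if (mkdir(path, 0700) < 0)
        {
            fprintf(stderr, "could not create directory \"%s\": %s\n",
                    path, strerror(errno));
            exit(1);
        }
    }
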
Alvaro Herrera
5c4cbd5d10 Windows: Make pg_ctl reliably detect service status
pg_ctl is using isatty() to verify whether the process is running in a
terminal, and if not it sends its output to Windows' Event Log ... which
does the wrong thing when the output has been redirected to a pipe, as
reported in bug #13592.

To fix, make pg_ctl use the code we already have to detect service-ness:
in the master branch, move src/backend/port/win32/security.c to src/port
(with suitable tweaks so that it runs properly in backend and frontend
environments); pg_ctl already has access to pgport so it Just Works.  In
older branches, that's likely to cause trouble, so instead duplicate the
required code in pg_ctl.c.

Author: Michael Paquier
Bug report and diagnosis: Egon Kocjan
Backpatch: all supported branches
2016-01-07 11:59:08 -03:00
Tom Lane
9b2eacba79 Fix treatment of *lpNumberOfBytesRecvd == 0: that's a completion condition.
pgwin32_recv() has treated a non-error return of zero bytes from WSARecv()
as being a reason to block ever since the current implementation was
introduced in commit a4c40f140d23cefb.  However, so far as one can tell
from Microsoft's documentation, that is just wrong: what it means is
graceful connection closure (in stream protocols) or receipt of a
zero-length message (in message protocols), and neither case should result
in blocking here.  The only reason the code worked at all was that control
then fell into the retry loop, which did *not* treat zero bytes specially,
so we'd get out after only wasting some cycles.  But as of 9.5 we do not
normally reach the retry loop and so the bug is exposed, as reported by
Shay Rojansky and diagnosed by Andres Freund.

Remove the unnecessary test on the byte count, and rearrange the code
in the retry loop so that it looks identical to the initial sequence.

Back-patch of commit 90e61df8130dc7051a108ada1219fb0680cb3eb6.  The
original plan was to apply this only to 9.5 and up, but after discussion
and buildfarm testing, it seems better to back-patch.  The noblock code
path has been at risk of this problem since it was introduced (in 9.0);
if it did happen in pre-9.5 branches, the symptom would be that a walsender
would wait indefinitely rather than noticing a loss of connection.  While
we lack proof that the case has been seen in the field, it seems possible
that it's happened without being reported.
2016-01-04 17:41:33 -05:00
Tom Lane
1eb515ad7c Teach pg_dump to quote reloption values safely.
Commit c7e27becd2e6eb93 fixed this on the backend side, but we neglected
the fact that several code paths in pg_dump were printing reloptions
values that had not gotten massaged by ruleutils.  Apply essentially the
same quoting logic in those places, too.
2016-01-02 19:04:45 -05:00
Tom Lane
5c0d6230f5 Fix overly-strict assertions in spgtextproc.c.
spg_text_inner_consistent is capable of reconstructing an empty string
to pass down to the next index level; this happens if we have an empty
string coming in, no prefix, and a dummy node label.  (In practice, what
is needed to trigger that is insertion of a whole bunch of empty-string
values.)  Then, we will arrive at the next level with in->level == 0
and a non-NULL (but zero length) in->reconstructedValue, which is valid
but the Assert tests weren't expecting it.

Per report from Andreas Seltenreich.  This has no impact in non-Assert
builds, so should not be a problem in production, but back-patch to
all affected branches anyway.

In passing, remove a couple of useless variable initializations and
shorten the code by not duplicating DatumGetPointer() calls.
2016-01-02 16:25:11 -05:00
Tom Lane
c6ab178292 Adjust back-branch release note description of commits a2a718b22 et al.
As pointed out by Michael Paquier, recovery_min_apply_delay didn't exist
in 9.0-9.3, making the release note text not very useful.  Instead make it
talk about recovery_target_xid, which did exist then.

9.0 is already out of support, but we can fix the text in the newer
branches' copies of its release notes.
2016-01-02 15:29:03 -05:00
Bruce Momjian
25b04756ea Update copyright for 2016
Backpatch certain files through 9.1
2016-01-02 13:33:39 -05:00
Tom Lane
69cfe15b57 Teach flatten_reloptions() to quote option values safely.
flatten_reloptions() supposed that it didn't really need to do anything
beyond inserting commas between reloption array elements.  However, in
principle the value of a reloption could be nearly anything, since the
grammar allows a quoted string there.  Any restrictions on it would come
from validity checking appropriate to the particular option, if any.

A reloption value that isn't a simple identifier or number could thus lead
to dump/reload failures due to syntax errors in CREATE statements issued
by pg_dump.  We've gotten away with not worrying about this so far with
the core-supported reloptions, but extensions might allow reloption values
that cause trouble, as in bug #13840 from Kouhei Sutou.

To fix, split the reloption array elements explicitly, and then convert
any value that doesn't look like a safe identifier to a string literal.
(The details of the quoting rule could be debated, but this way is safe
and requires little code.)  While we're at it, also quote reloption names
if they're not safe identifiers; that may not be a likely problem in the
field, but we might as well try to be bulletproof here.

It's been like this for a long time, so back-patch to all supported
branches.

Kouhei Sutou, adjusted some by me
2016-01-01 15:27:53 -05:00
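A hypothetical sketch of the quoting rule described above (names invented; the
real logic lives in flatten_reloptions() and pg_dump): emit a value verbatim
only if it looks like a plain identifier or number, otherwise wrap it in a
single-quoted literal with embedded quotes doubled.

    #include <ctype.h>
    #include <stdbool.h>
    #include <stdio.h>

    /* True if val can be emitted without quotes: nonempty and made up of
     * characters that cannot break the surrounding CREATE statement. */
    static bool
    value_is_safe_unquoted(const char *val)
    {
        const char *p;

        if (*val == '\0')
            return false;
        for (p = val; *p; p++)
        {
            if (!isalnum((unsigned char) *p) && *p != '_' && *p != '.' && *p != '-')
                return false;
        }
        return true;
    }

    static void
    emit_reloption_value(FILE *out, const char *val)
    {
        if (value_is_safe_unquoted(val))
            fputs(val, out);
        else
        {
            const char *p;

            fputc('\'', out);
            for (p = val; *p; p++)
            {
                if (*p == '\'')
                    fputc('\'', out);   /* double embedded single quotes */
                fputc(*p, out);
            }
            fputc('\'', out);
        }
    }
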
Tom Lane
8e79b24c5d Add some more defenses against silly estimates to gincostestimate().
A report from Andy Colson showed that gincostestimate() was not being
nearly paranoid enough about whether to believe the statistics it finds in
the index metapage.  The problem is that the metapage stats (other than the
pending-pages count) are only updated by VACUUM, and in the worst case
could still reflect the index's original empty state even when it has grown
to many entries.  We attempted to deal with that by scaling up the stats to
match the current index size, but if nEntries is zero then scaling it up
still gives zero.  Moreover, the proportion of pages that are entry pages
vs. data pages vs. pending pages is unlikely to be estimated very well by
scaling if the index is now orders of magnitude larger than before.

We can improve matters by expanding the use of the rule-of-thumb estimates
I introduced in commit 7fb008c5ee59b040: if the index has grown by more
than a cutoff amount (here set at 4X growth) since VACUUM, then use the
rule-of-thumb numbers instead of scaling.  This might not be exactly right
but it seems much less likely to produce insane estimates.

I also improved both the scaling estimate and the rule-of-thumb estimate
to account for numPendingPages, since it's reasonable to expect that that
is accurate in any case, and certainly pages that are in the pending list
are not either entry or data pages.

As a somewhat separate issue, adjust the estimation equations that are
concerned with extra fetches for partial-match searches.  These equations
suppose that a fraction partialEntries / numEntries of the entry and data
pages will be visited as a consequence of a partial-match search.  Now,
it's physically impossible for that fraction to exceed one, but our
estimate of partialEntries is mostly bunk, and our estimate of numEntries
isn't exactly gospel either, so we could arrive at a silly value.  In the
example presented by Andy we were coming out with a value of 100, leading
to insane cost estimates.  Clamp the fraction to one to avoid that.

Like the previous patch, back-patch to all supported branches; this
problem can be demonstrated in one form or another in all of them.
2016-01-01 13:42:43 -05:00
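A minimal sketch of the clamp described in the last part of the commit above
(illustrative helper, not the gincostestimate() code): the estimated fraction
of pages visited by a partial-match search is capped at 1.0 before it feeds
into the cost arithmetic.

    static double
    partial_match_fraction(double partialEntries, double numEntries)
    {
        double  frac;

        if (numEntries <= 0)
            return 0.0;
        frac = partialEntries / numEntries;
        if (frac > 1.0)
            frac = 1.0;             /* clamp a silly estimate to "all pages" */
        return frac;
    }
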
Tom Lane
7adbde26ab Document the exponentiation operator as associating left to right.
Common mathematical convention is that exponentiation associates right to
left.  We aren't going to change the parser for this, but we could note
it in the operator's description.  (It's already noted in the operator
precedence/associativity table, but users might not look there.)
Per bug #13829 from Henrik Pauli.
2015-12-28 12:09:32 -05:00
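A worked example of the difference (my arithmetic, not text from the commit):
with left-to-right association, 2 ^ 3 ^ 2 parses as (2 ^ 3) ^ 2.

    \[
      \underbrace{(2^{3})^{2} = 64}_{\mbox{left-to-right (PostgreSQL)}}
      \qquad \mbox{versus} \qquad
      \underbrace{2^{(3^{2})} = 512}_{\mbox{right-to-left (mathematical convention)}}
    \]
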
Alvaro Herrera
4fb9e6109a Fix translation domain in pg_basebackup
For some reason, we've been overlooking the fact that pg_receivexlog
and pg_recvlogical have been using the wrong translation domains all along,
so their output has never been translated.  The right domain is
pg_basebackup, not their own executable names.

Noticed by Ioseph Kim, who's been working on the Korean translation.

Backpatch pg_receivexlog to 9.2 and pg_recvlogical to 9.4.
2015-12-28 10:50:35 -03:00
Alvaro Herrera
51dd54ba79 Add forgotten CHECK_FOR_INTERRUPT calls in pgcrypto's crypt()
Both Blowfish and DES implementations of crypt() can take arbitrarily
long time, depending on the number of rounds specified by the caller;
make sure they can be interrupted.

Author: Andreas Karlsson
Reviewer: Jeff Janes

Backpatch to 9.1.
2015-12-27 13:03:19 -03:00
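A minimal sketch of the pattern being added (illustrative loop, not pgcrypto's
actual code): a round loop whose length is caller-controlled should poll for
interrupts so the query can be cancelled.

    #include "postgres.h"
    #include "miscadmin.h"

    static void
    run_rounds(long rounds)
    {
        long    i;

        for (i = 0; i < rounds; i++)
        {
            CHECK_FOR_INTERRUPTS(); /* honor query cancel / backend shutdown */
            /* ... one round of the hash or key-schedule work ... */
        }
    }
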
Alvaro Herrera
f9643d0d6d Rework internals of changing a type's ownership
This is necessary so that REASSIGN OWNED does the right thing with
composite types, to wit, that it also alters ownership of the type's
pg_class entry -- previously, the pg_class entry remained owned by the
original user, which later caused other failures, such as the new owner being
unable to use ALTER TYPE to rename an attribute of the affected composite
type.  Also, if the original owner is later dropped, the pg_class entry
becomes owned by a non-existent user, which is bogus.

To fix, create a new routine AlterTypeOwner_oid which knows whether to
pass the request to ATExecChangeOwner or deal with it directly, and use
that in shdepReassignOwner rather than calling AlterTypeOwnerInternal
directly.  AlterTypeOwnerInternal is now simpler in that it only
modifies the pg_type entry and recurses to handle a possible array type;
higher-level tasks are handled by either AlterTypeOwner directly or
AlterTypeOwner_oid.

I took the opportunity to add a few more objects to the test rig for
REASSIGN OWNED, so that more cases are exercised.  Additional ones could
be added for superuser-only-ownable objects (such as FDWs and event
triggers) but I didn't want to push my luck by adding a new superuser to
the tests on a backpatchable bug fix.

Per bug #13666 reported by Chris Pacejo.

This is a backpatch of commit 756e7b4c9db1 to branches 9.1 -- 9.4.
2015-12-21 19:49:15 -03:00
Alvaro Herrera
653530c8b1 some bullshit 2015-12-21 19:31:04 -03:00
Alvaro Herrera
7af3dd540e adjust ACL owners for REASSIGN and ALTER OWNER TO
When REASSIGN and ALTER OWNER TO are used, both the object owner and ACL
list should be changed from the old owner to the new owner. This patch
fixes types, foreign data wrappers, and foreign servers to change their
ACL list properly;  they already changed owners properly.

Report by Alexey Bashtanov

This is a backpatch of commit 59367fdf97c (for bug #9923) by Bruce
Momjian to branches 9.1 - 9.4; it wasn't backpatched originally out of
concerns that it would create a backwards compatibility problem, but per
discussion related to bug #13666 that turns out to have been misguided.
(Therefore, the entry in the 9.5 release notes should be removed.)

Note that 9.1 didn't have privileges on types (which were introduced by
commit 729205571e81), so this commit only changes foreign-data related
objects in that branch.

Discussion: http://www.postgresql.org/message-id/20151216224004.GL2618@alvherre.pgsql
	http://www.postgresql.org/message-id/10227.1450373793@sss.pgh.pa.us
2015-12-21 19:16:15 -03:00
Tom Lane
6ecd7f5010 Remove silly completion for "DELETE FROM tabname ...".
psql offered USING, WHERE, and SET in this context, but SET is not a valid
possibility here.  Seems to have been a thinko in commit f5ab0a14ea83eb6c
which added DELETE's USING option.
2015-12-20 18:29:51 -05:00
Tom Lane
b417779886 Fix improper initialization order for readline.
Turns out we must set rl_basic_word_break_characters *before* we call
rl_initialize() the first time, because it will quietly copy that value
elsewhere --- but only on the first call.  (Love these undocumented
dependencies.)  I broke this yesterday in commit 2ec477dc8108339d;
like that commit, back-patch to all active branches.  Per report from
Pavel Stehule.
2015-12-17 16:55:47 -05:00
Tom Lane
bcce4a5e3a Cope with Readline's failure to track SIGWINCH events outside of input.
It emerges that libreadline doesn't notice terminal window size change
events unless they occur while collecting input.  This is easy to stumble
over if you resize the window while using a pager to look at query output,
but it can be demonstrated without any pager involvement.  The symptom is
that queries exceeding one line are misdisplayed during subsequent input
cycles, because libreadline has the wrong idea of the screen dimensions.

The safest, simplest way to fix this is to call rl_reset_screen_size()
just before calling readline().  That causes an extra ioctl(TIOCGWINSZ)
for every command; but since it only happens when reading from a tty, the
performance impact should be negligible.  A more valid objection is that
this still leaves a tiny window during entry to readline() wherein delivery
of SIGWINCH will be missed; but the practical consequences of that are
probably negligible.  In any case, there doesn't seem to be any good way to
avoid the race, since readline exposes no functions that seem safe to call
from a generic signal handler --- rl_reset_screen_size() certainly isn't.

It turns out that we also need an explicit rl_initialize() call, else
rl_reset_screen_size() dumps core when called before the first readline()
call.

rl_reset_screen_size() is not present in old versions of libreadline,
so we need a configure test for that.  (rl_initialize() is present at
least back to readline 4.0, so we won't bother with a test for it.)
We would need a configure test anyway since libedit's emulation of
libreadline doesn't currently include such a function.  Fortunately,
libedit seems not to have any corresponding bug.

Merlin Moncure, adjusted a bit by me
2015-12-16 16:58:56 -05:00
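A sketch of the call sequence described above (hypothetical wrapper name; the
configure-test macro guarding the new function is shown for illustration):

    #include <stdbool.h>
    #include <stdio.h>
    #include <readline/readline.h>

    char *
    read_one_line(const char *prompt)
    {
        static bool initialized = false;

        if (!initialized)
        {
            rl_initialize();        /* must happen before rl_reset_screen_size() */
            initialized = true;
        }
    #ifdef HAVE_RL_RESET_SCREEN_SIZE
        /* re-read the terminal size in case SIGWINCH arrived between inputs */
        rl_reset_screen_size();
    #endif
        return readline(prompt);
    }
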
Alvaro Herrera
48a7074a27 Add missing CHECK_FOR_INTERRUPTS in lseg_inside_poly
Apparently, there are bugs in this code that cause it to loop endlessly.
Those bugs still need more research, but in the meantime it's clear that
the loop is missing a check for interrupts so that it can be cancelled
promptly.

Backpatch to 9.1 -- this has been missing since 49475aab8d0d.
2015-12-14 16:44:40 -03:00
Heikki Linnakangas
1e23caae30 Fix out-of-memory error handling in ParameterDescription message processing.
If libpq ran out of memory while constructing the result set, it would hang,
waiting for more data from the server, which might never arrive. To fix,
distinguish between the out-of-memory and not-enough-data cases, and give
a proper error message back to the client on OOM.

There are still similar issues in handling COPY start messages, but let's
handle that as a separate patch.

Michael Paquier, Amit Kapila and me. Backpatch to all supported versions.
2015-12-14 18:41:11 +02:00