mirror of https://github.com/postgres/postgres.git synced 2025-04-20 00:42:27 +03:00

44388 Commits

Michael Paquier
d72a7e4da1 Fix buffer overflow when processing SCRAM final message in libpq
When a client connects to a rogue server sending specifically-crafted
messages, this can suffice to execute arbitrary code as the operating
system account used by the client.

While on it, fix an error-handling issue when decoding an incorrect salt
included in the first message received from the server.

Author: Michael Paquier
Reviewed-by: Jonathan Katz, Heikki Linnakangas
Security: CVE-2019-10164
Backpatch-through: 10
2019-06-17 22:14:09 +09:00
Michael Paquier
90adc16ea1 Fix buffer overflow when parsing SCRAM verifiers in backend
Any authenticated user can overflow a stack-based buffer by changing the
user's own password to a purposely crafted value.  This often suffices to
execute arbitrary code as the PostgreSQL operating system account.

This fix was contributed by multiple people, based on an initial analysis
from Tom Lane.  This issue was introduced by 68e61ee, so it was possible
to make use of it at authentication time.  It became easier to trigger
after ccae190, which made the SCRAM parsing stricter when changing a
password, in the case where the client passes down a verifier already
hashed using SCRAM.  Back-patch to v10, where SCRAM was introduced.

Reported-by: Alexander Lakhin
Author: Jonathan Katz, Heikki Linnakangas, Michael Paquier
Security: CVE-2019-10164
Backpatch-through: 10
2019-06-17 21:48:34 +09:00
Alvaro Herrera
93d4484ef8 Revert "Avoid spurious deadlocks when upgrading a tuple lock"
This reverts commits 3da73d6839dc and de87a084c0a5.

This code has some tricky corner cases that I'm not sure are correct and
that aren't properly tested anyway, so I'm reverting the whole thing for
next week's releases (reintroducing the deadlock bug that we set out to
fix).  I'll try again afterwards.

Discussion: https://postgr.es/m/E1hbXKQ-0003g1-0C@gemulon.postgresql.org
2019-06-16 22:24:20 -04:00
Tom Lane
0e45e52b51 Release notes for 10.9, 9.6.14, 9.5.18, 9.4.23.
(11.4 notes are already done.)
2019-06-16 15:39:08 -04:00
Andrew Gierth
2913a892e1 Prefer timezone name "UTC" over alternative spellings.
tzdb 2019a made "UCT" a link to the "UTC" zone rather than a separate
zone with its own abbreviation. Unfortunately, our code for choosing a
timezone in initdb has an arbitrary preference for names earlier in
the alphabet, and so it would choose the spelling "UCT" over "UTC"
when the system is running on a UTC zone.

Commit 23bd3cec6 was backpatched in order to address this issue, but
that code helps only when /etc/localtime exists as a symlink, and does
nothing to help on systems where /etc/localtime is a copy of a zone
file (as is the standard setup on FreeBSD and probably some other
platforms too) or when /etc/localtime is simply absent (giving UTC as
the default).

Accordingly, add a preference for the spelling "UTC", such that if
multiple zone names have equally good content matches, we prefer that
name before applying the existing arbitrary rules. Also add a slightly
lower preference for "Etc/UTC"; lower because that preserves the
previous behaviour of choosing the shorter name, while still letting us
choose "Etc/UTC" over "Etc/UCT" when both exist but "UTC" does
not (not common, but I've seen it happen).

Backpatch all the way, because the tzdb change that sparked this issue
is in those branches too.
2019-06-15 18:18:03 +01:00
Alvaro Herrera
744639739c Silence compiler warning
Introduced in de87a084c0a5.
2019-06-14 11:33:40 -04:00
Tom Lane
8de574aa8b Attempt to identify system timezone by reading /etc/localtime symlink.
On many modern platforms, /etc/localtime is a symlink to a file within the
IANA database.  Reading the symlink lets us find out the name of the system
timezone directly, without going through the brute-force search embodied in
scan_available_timezones().  This shortens the runtime of initdb by some
tens of ms, which is helpful for the buildfarm, and it also allows us to
reliably select the same zone name the system was actually configured for,
rather than possibly choosing one of IANA's many zone aliases.  (For
example, in a system configured for "Asia/Tokyo", the brute-force search
would not choose that name but its alias "Japan", on the grounds of the
latter string being shorter.  More surprisingly, "Navajo" is preferred
to either "America/Denver" or "US/Mountain", as seen in an old complaint
from Josh Berkus.)

If /etc/localtime doesn't exist, or isn't a symlink, or we can't make
sense of its contents, or the contents match a zone we know but that
zone doesn't match the observed behavior of localtime(), fall back to
the brute-force search.

Also, tweak initdb so that it prints the zone name it selected.

In passing, replace the last few references to the "Olson" database in
code comments with "IANA", as that's been our preferred term since
commit b2cbced9e.

Back-patch of commit 23bd3cec6.  The original intention was to not
back-patch, since this can result in cosmetic behavioral changes ---
for example, on my own workstation initdb now chooses "America/New_York",
where it used to prefer "US/Eastern" which is equivalent and shorter.
However, our hand has been more or less forced by tzdb update 2019a,
which made the "UCT" zone fully equivalent to "UTC".  Our old code
now prefers "UCT" on the grounds of it being alphabetically first,
and that's making nobody happy.  Choosing the alias indicated by
/etc/localtime is a more defensible behavior.  (Users who don't like
the results can always force the decision by setting the TZ environment
variable before running initdb.)

Patch by me, per a suggestion from Robert Haas; review by Michael Paquier

Discussion: https://postgr.es/m/7408.1525812528@sss.pgh.pa.us
Discussion: https://postgr.es/m/20190604085735.GD24018@msg.df7cb.de
2019-06-14 11:25:13 -04:00
Alvaro Herrera
14a91a8fc2 Avoid spurious deadlocks when upgrading a tuple lock
When two (or more) transactions are waiting for transaction T1 to release a
tuple-level lock, and transaction T1 upgrades its lock to a higher level, a
spurious deadlock can be reported among the waiting transactions when T1
finishes.  The simplest example case seems to be:

T1: select id from job where name = 'a' for key share;
Y: select id from job where name = 'a' for update; -- starts waiting for T1
Z: select id from job where name = 'a' for key share;
T1: update job set name = 'b' where id = 1;
Z: update job set name = 'c' where id = 1; -- starts waiting for T1
T1: rollback;

At this point, transaction Y is rolled back on account of a deadlock: Y
holds the heavyweight tuple lock and is waiting for the Xmax to be released,
while Z holds part of the multixact and tries to acquire the heavyweight
lock (per protocol) and goes to sleep; once T1 releases its part of the
multixact, Z is awakened only to be put back to sleep on the heavyweight
lock that Y is holding while sleeping.  Kaboom.

This can be avoided by having Z skip the heavyweight lock acquisition.  As
far as I can see, the biggest downside is that if there are multiple Z
transactions, the order in which they resume after T1 finishes is not
guaranteed.

Backpatch to 9.6.  The patch applies cleanly on 9.5, but the new tests don't
work there (because isolationtester is not smart enough), so I'm not going
to risk it.

Author: Oleksii Kliukin
Discussion: https://postgr.es/m/B9C9D7CD-EB94-4635-91B6-E558ACEC0EC3@hintbits.com
2019-06-13 17:28:24 -04:00
Tom Lane
945ae92c8a Mark ReplicationSlotCtl as PGDLLIMPORT.
Also MyReplicationSlot, in branches where it wasn't already.

This was discussed in the thread that resulted in c572599c6, but
for some reason nobody pulled the trigger.  Now that we have another
request for the same thing, we should just do it.

Craig Ringer

Discussion: https://postgr.es/m/CAMsr+YFTsq-86MnsNng=mPvjjh5EAbzfMK0ptJPvzyvpFARuRg@mail.gmail.com
Discussion: https://postgr.es/m/345138875.20190611151943@cybertec.at
2019-06-13 10:53:17 -04:00
Etsuro Fujita
0f2b234263 postgres_fdw: Account for triggers in non-direct remote UPDATE planning.
Previously, in postgresPlanForeignModify, we planned an UPDATE operation
on a foreign table so that only the columns that were explicit targets of
the UPDATE were transmitted, so as to avoid unnecessary data transmission.
But if there were BEFORE ROW UPDATE triggers on the foreign table, those
triggers might change values for non-target columns, in which case we
would miss sending the changed values for those columns.  Prevent
optimizing away the transmission of all columns if there are BEFORE ROW
UPDATE triggers on the foreign table.

This is an oversight in commit 7cbe57c34 which added triggers on foreign
tables, so apply the patch all the way back to 9.4 where that came in.
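
For illustration, a minimal sketch of the kind of setup affected, assuming a
postgres_fdw server named remote_srv already exists (all other names here are
hypothetical):

CREATE FOREIGN TABLE remote_orders (id int, status text, updated_at timestamptz)
    SERVER remote_srv OPTIONS (table_name 'orders');

CREATE FUNCTION touch_updated_at() RETURNS trigger LANGUAGE plpgsql AS $$
BEGIN
  NEW.updated_at := now();   -- changes a column that is not an UPDATE target
  RETURN NEW;
END;
$$;

CREATE TRIGGER orders_touch BEFORE UPDATE ON remote_orders
    FOR EACH ROW EXECUTE PROCEDURE touch_updated_at();

-- previously only "status" was transmitted to the remote side, so the
-- trigger's change to "updated_at" could be lost
UPDATE remote_orders SET status = 'shipped' WHERE id = 1;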

Author: Shohei Mochizuki
Reviewed-by: Amit Langote
Discussion: https://postgr.es/m/201905270152.x4R1q3qi014550@toshiba.co.jp
2019-06-13 17:59:12 +09:00
Tom Lane
909a92e195 Doc: improve description of allowed spellings for Boolean input.
datatype.sgml failed to explain that boolin() accepts any unique
prefix of the basic input strings.  Indeed it was actively misleading
because it called out a few minimal prefixes without mentioning that
there were more valid inputs.

I also felt that it wasn't doing anybody any favors by conflating
SQL key words, valid Boolean input, and string literals containing
valid Boolean input.  Rewrite in hopes of reducing the confusion.
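
For instance, any unique prefix of the accepted strings is valid input
(a brief illustration of the rule just described):

SELECT 'tru'::boolean, 'ye'::boolean, 'of'::boolean;   -- true, true, false
SELECT 'o'::boolean;   -- fails: "o" is ambiguous between "on" and "off"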

Per bug #15836 from Yuming Wang, as diagnosed by David Johnston.
Back-patch to supported branches.

Discussion: https://postgr.es/m/15836-656fab055735f511@postgresql.org
2019-06-12 22:54:46 -04:00
Tom Lane
30d3df0a7b Fix incorrect printing of queries with duplicated join names.
Given a query in which multiple JOIN nodes used the same alias
(which'd necessarily be in different sub-SELECTs), ruleutils.c
would assign the JOIN nodes distinct aliases for clarity ...
but then it forgot to print the modified aliases when dumping
the JOIN nodes themselves.  This results in a dump/reload hazard
for views, because the emitted query is flat-out incorrect:
Vars will be printed with table names that have no referent.
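
A hypothetical sketch of a view shape that could run into this (all table
and alias names are made up):

CREATE VIEW v AS
  SELECT id
    FROM (t1 JOIN t2 USING (id)) AS j
   WHERE EXISTS (SELECT 1 FROM (t3 JOIN t4 USING (id)) AS j);

-- ruleutils.c renames one of the JOIN aliases (say, to j_1) for clarity,
-- but previously the renamed alias was not printed at the JOIN itself,
-- so Vars referring to it pointed at a nonexistent table name.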

This has been wrong for a long time, so back-patch to all supported
branches.

Philip Dubé

Discussion: https://postgr.es/m/CY4PR2101MB080246F2955FF58A6ED1FEAC98140@CY4PR2101MB0802.namprd21.prod.outlook.com
2019-06-12 19:43:10 -04:00
David Rowley
1bbcbfaf78 doc: Fix grammatical error in partitioning docs
Reported-by: Amit Langote
Discussion: https://postgr.es/m/CA+HiwqGZFkKi0TkBGYpr2_5qrRAbHZoP47AP1BRLUOUkfQdy_A@mail.gmail.com
Backpatch-through: 10
2019-06-13 10:35:47 +12:00
Tom Lane
ac8f2e1ef3 In walreceiver, don't try to do ereport() in a signal handler.
This is quite unsafe, even for the case of ereport(FATAL) where we won't
return control to the interrupted code, and despite this code's use of
a flag to restrict the areas where we'd try to do it.  It's possible
for example that we interrupt malloc or free while that's holding a lock
that's meant to protect against cross-thread interference.  Then, any
attempt to do malloc or free within ereport() will result in a deadlock,
preventing the walreceiver process from exiting in response to SIGTERM.
We hypothesize that this explains some hard-to-reproduce failures seen
in the buildfarm.

Hence, get rid of the immediate-exit code in WalRcvShutdownHandler,
as well as the logic associated with WalRcvImmediateInterruptOK.
Instead, we need to take care that potentially-blocking operations
in the walreceiver's data transmission logic (libpqwalreceiver.c)
will respond reasonably promptly to the process's latch becoming
set and then call ProcessWalRcvInterrupts.  Much of the needed code
for that was already present in libpqwalreceiver.c.  I refactored
things a bit so that all the uses of PQgetResult use latch-aware
waiting, but didn't need to do much more.

These changes should be enough to ensure that libpqwalreceiver.c
will respond promptly to SIGTERM whenever it's waiting to receive
data.  In principle, it could block for a long time while waiting
to send data too, and this patch does nothing to guard against that.
I think that that hazard is mostly theoretical though: such blocking
should occur only if we fill the kernel's data transmission buffers,
and we don't generally send enough data to make that happen without
waiting for input.  If we find out that the hazard isn't just
theoretical, we could fix it by using PQsetnonblocking, but that
would require more ticklish changes than I care to make now.

Back-patch of commit a1a789eb5.  This problem goes all the way back
to the origins of walreceiver; but given the substantial reworking
the module received during the v10 cycle, it seems unsafe to assume
that our testing on HEAD validates this patch for pre-v10 branches.
And we'd need to back-patch some prerequisite patches (at least
597a87ccc and its followups, maybe other things), increasing the risk
of problems.  Given the dearth of field reports matching this problem,
it's not worth much risk.  Hence back-patch to v10 and v11 only.

Patch by me; thanks to Thomas Munro for review.

Discussion: https://postgr.es/m/20190416070119.GK2673@paquier.xyz
2019-06-12 17:29:48 -04:00
Tom Lane
2981e5a612 Fix ALTER COLUMN TYPE failure with a partial exclusion constraint.
ATExecAlterColumnType failed to consider the possibility that an index
that needs to be rebuilt might be a child of a constraint that needs to be
rebuilt.  We missed this so far because usually a constraint index doesn't
have a direct dependency on its table, just on the constraint object.
But if there's a WHERE clause, then dependency analysis of the WHERE
clause results in direct dependencies on the column(s) mentioned in WHERE.
This led to trying to drop and rebuild both the constraint and its
underlying index.

In v11/HEAD, we successfully drop both the index and the constraint,
and then try to rebuild both, and of course the second rebuild hits a
duplicate-index-name problem.  Before v11, it fails with obscure messages
about a missing relation OID, due to trying to drop the index twice.

This is essentially the same kind of problem noted in commit
20bef2c31: the possible dependency linkages are broader than what
ATExecAlterColumnType was designed for.  It was probably OK when
written, but it's certainly been broken since the introduction of
partial exclusion constraints.  Fix by adding an explicit check
for whether any of the indexes-to-be-rebuilt belong to any of the
constraints-to-be-rebuilt, and ignoring any that do.
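
A minimal sketch of the failing case (table and column names are made up):

CREATE TABLE rooms (
  id     int,
  booked circle,
  EXCLUDE USING gist (booked WITH &&) WHERE (id > 0)
);
-- the WHERE clause makes the constraint's index depend directly on "id",
-- so this used to try to rebuild both the constraint and its underlying
-- index, failing with a duplicate-index-name error (or worse, pre-v11)
ALTER TABLE rooms ALTER COLUMN id TYPE bigint;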

In passing, fix a latent bug introduced by commit 8b08f7d48: in
get_constraint_index() we must "continue" not "break" when rejecting
a relation of a wrong relkind.  This is harmless today because we don't
expect that code path to be taken anyway; but if there ever were any
relations to be ignored, the existing coding would have an extremely
undesirable dependency on the order of pg_depend entries.

Also adjust a couple of obsolete comments.

Per bug #15835 from Yaroslav Schekin.  Back-patch to all supported
branches.

Discussion: https://postgr.es/m/15835-32d9b7a76c06a7a9@postgresql.org
2019-06-12 12:29:42 -04:00
Michael Paquier
56a932533a Fix handling of COMMENT for domain constraints
For a non-superuser, changing a comment on a domain constraint led to a
cache lookup failure, as the code tried to perform the ownership lookup
on the constraint OID itself, thinking that it was a type; this check
needs to happen on the type the domain constraint relies on.  As that
type can be determined directly from the constraint OID, first fetch its
type OID and perform the ownership check on it.
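
For example (domain and constraint names are hypothetical):

CREATE DOMAIN posint AS int CONSTRAINT posint_positive CHECK (VALUE > 0);
-- as a non-superuser who owns the domain, this used to fail with a
-- "cache lookup failed" error instead of checking ownership of the
-- domain's type
COMMENT ON CONSTRAINT posint_positive ON DOMAIN posint IS 'must be positive';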

This has been broken since 7eca575, which split the handling of comments
for table constraints and domain constraints, so back-patch down to
9.5.

Reported-by: Clemens Ladisch
Author: Daniel Gustafsson, Michael Paquier
Reviewed-by: Álvaro Herrera
Discussion: https://postgr.es/m/15833-808e11904835d26f@postgresql.org
Backpatch-through: 9.5
2019-06-12 11:31:00 +09:00
David Rowley
6e1dc84533 doc: Add best practices section to partitioning docs
A few questionable partitioning designs have been cropping up lately
around the mailing lists.  Generally, these cases have involved using too
many partitions, which has caused performance or OOM problems for the
users.

Since we have very little else to guide users into good design, here we
add a new section to the partitioning documentation with some best
practice guidelines for good design.

Reviewed-by: Justin Pryzby, Amit Langote, Alvaro Herrera
Discussion: https://postgr.es/m/CAKJS1f-2rx+E9mG3xrCVHupefMjAp1+tpczQa9SEOZWyU7fjEA@mail.gmail.com
Backpatch-through: 10
2019-06-12 08:09:28 +12:00
Tom Lane
b6f5689aad Fix conversion of JSON strings to JSON output columns in json_to_record().
json_to_record(), when an output column is declared as type json or jsonb,
should emit the corresponding field of the input JSON object.  But it got
this slightly wrong when the field is just a string literal: it failed to
escape the contents of the string.  That typically resulted in syntax
errors if the string contained any double quotes or backslashes.
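
A sketch of the affected case, with a string field targeted at a json
output column:

SELECT str FROM json_to_record('{"str": "has \"quotes\" inside"}') AS t(str json);
-- before the fix the string's contents were emitted unescaped, typically
-- causing a syntax error; now this yields the json value "has \"quotes\" inside"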

jsonb_to_record() handles such cases correctly, but I added corresponding
test cases for it too, to prevent future backsliding.

Improve the documentation, as it provided only a very hand-wavy
description of the conversion rules used by these functions.

Per bug report from Robert Vollmert.  Back-patch to v10 where the
error was introduced (by commit cf35346e8).

Note that PG 9.4 - 9.6 also get this case wrong, but differently so:
they feed the de-escaped contents of the string literal to json[b]_in.
That behavior is less obviously wrong, so possibly it's being depended on
in the field, so I won't risk trying to make the older branches behave
like the newer ones.

Discussion: https://postgr.es/m/D6921B37-BD8E-4664-8D5F-DB3525765DCD@vllmrt.net
2019-06-11 13:33:08 -04:00
Andres Freund
52ad5fc0a6 Don't access catalogs to validate GUCs when not connected to a DB.
Vignesh found this bug in the check hook for default_table_access_method,
but that code was just copied from older GUCs.  Investigation by Michael
and me then found the bug in further places.

When not connected to a database (e.g. in a walsender connection), we
cannot perform (most) GUC checks that need database access. Even when
only shared tables are needed, unless they're
nailed (c.f. RelationCacheInitializePhase2()), they cannot be accessed
without pg_class etc. being present.

Fix by extending the existing IsTransactionState() checks to also
check for MyDatabaseOid.

Reported-By: Vignesh C, Michael Paquier, Andres Freund
Author: Vignesh C, Andres Freund
Discussion: https://postgr.es/m/CALDaNm1KXK9gbZfY-p_peRFm_XrBh1OwQO1Kk6Gig0c0fVZ2uw%40mail.gmail.com
Backpatch: 9.4-
2019-06-10 23:36:48 -07:00
Alvaro Herrera
1eb8a5ea46 Make pg_dump emit ATTACH PARTITION instead of PARTITION OF (reprise)
Using PARTITION OF can result in column ordering being changed from the
database being dumped, if the partition uses a column layout different
from the parent's.  It's not pg_dump's job to editorialize on table
definitions, so this is not acceptable; back-patch all the way back to
pg10, where partitioned tables were introduced.
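
An illustrative sketch of the change in the SQL that pg_dump emits (table
and column names are hypothetical, not exact pg_dump output):

-- previously:
CREATE TABLE measurements_2019 PARTITION OF measurements
    FOR VALUES FROM ('2019-01-01') TO ('2020-01-01');

-- now: the partition keeps its own column order (and tablespace, in pg12),
-- and is attached separately
CREATE TABLE measurements_2019 (ts timestamptz, value numeric);
ALTER TABLE measurements ATTACH PARTITION measurements_2019
    FOR VALUES FROM ('2019-01-01') TO ('2020-01-01');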

This change also ensures that partitions end up in the correct
tablespace, if different from the parent's; this is an oversight in
ca4103025dfe (in pg12 only).  Partitioned indexes (in pg11) don't have
this problem, because they're already created as independent indexes and
attached to their parents afterwards.

This change also has the advantage that the partition is restorable from
the dump (as a standalone table) even if its parent table isn't
restored.

The original commits (3b23552ad8bb in branch master) failed to cover
subsidiary column elements correctly, such as NOT NULL and CHECK
constraints, as reported by Rushabh Lathia (initially as a failure
to restore serial columns).  They were reverted.  This recapitulation
commit fixes those problems.

Add some pg_dump tests to verify these things more exhaustively,
including constraints with legacy-inheritance tables, which were not
tested originally.  In branches 10 and 11, add a local constraint to the
pg_dump test partition that was added by commit 2d7eeb1b1492 to master.

Author: Álvaro Herrera, David Rowley
Reviewed-by: Álvaro Herrera
Discussion: https://postgr.es/m/CAKJS1f_1c260nOt_vBJ067AZ3JXptXVRohDVMLEBmudX1YEx-A@mail.gmail.com
Discussion: https://postgr.es/m/20190423185007.GA27954@alvherre.pgsql
Discussion: https://postgr.es/m/CAGPqQf0iQV=PPOv2Btog9J9AwOQp6HmuVd6SbGTR_v3Zp2XT1w@mail.gmail.com
2019-06-10 18:56:23 -04:00
Alexander Korotkov
589f91fc31 Fix operator naming in pg_trgm GUC option descriptions
Descriptions of pg_trgm GUC options had % replaced with %% as if they were
printf-like format strings.  But that's not needed, since they are just plain
strings.  This commit fixes that.  Backpatch to the oldest supported version,
since this error has been present from the beginning.

Reported-by: Masahiko Sawada
Discussion: https://postgr.es/m/CAD21AoAgPKODUsu9gqUFiNqEOAqedStxJ-a0sapsJXWWAVp%3Dxg%40mail.gmail.com
Backpatch-through: 9.4
2019-06-10 20:24:00 +03:00
Alexander Korotkov
9ee98cc3fa Add docs of missing GUC to pgtrgm.sgml
be8a7a68 introduced the pg_trgm.strict_word_similarity_threshold GUC, but
missed adding documentation for it.  This commit fixes that.

Discussion: https://postgr.es/m/fc907f70-448e-fda3-3aa4-209a59597af0%402ndquadrant.com
Author: Ian Barwick
Reviewed-by: Masahiko Sawada, Michael Paquier
Backpatch-through: 9.6
2019-06-10 20:24:00 +03:00
Alexander Korotkov
7c2122f11f Fix docs indentation in pgtrgm.sgml
5871b884 introduced the pg_trgm.word_similarity_threshold GUC, but its
documentation contained incorrect indentation.  This commit fixes that.
Backpatch for easier backpatching of other documentation fixes.

Discussion: https://postgr.es/m/4c735d30-ab59-fc0e-45d8-f90eb5ed3855%402ndquadrant.com
Author: Ian Barwick
Backpatch-through: 9.6
2019-06-10 20:24:00 +03:00
Heikki Linnakangas
88ca787b16 Fix copy-pasto in freeing memory on error in vacuumlo.
It's harmless to call PQfreemem() with a NULL argument, so the only
consequence was that if allocating 'schema' failed, but allocating 'table'
or 'field' succeeded, we would leak a bit of memory. That's highly
unlikely to happen, so this is just academic, but let's get it right.

Per bug #15838 from Timur Birsh. Backpatch back to 9.5, where the
PQfreemem() calls were introduced.

Discussion: https://www.postgresql.org/message-id/15838-3221652c72c5e69d@postgresql.org
2019-06-07 12:44:01 +03:00
Amit Kapila
fac1ed2742 Fix inconsistency in comments atop ExecParallelEstimate.
When this code was initially introduced in commit d1b7c1ff, the structure
used was SharedPlanStateInstrumentation, but when it was later changed to
the Instrumentation structure in commit b287df70, we forgot to update the
comment.

Reported-by: Wu Fei
Author: Wu Fei
Reviewed-by: Amit Kapila
Backpatch-through: 9.6
Discussion: https://postgr.es/m/52E6E0843B9D774C8C73D6CF64402F0562215EB2@G08CNEXMBPEKD02.g08.fujitsu.local
2019-06-07 05:35:31 +05:30
Tom Lane
974a2867ea Mark a few parallelism-related variables with PGDLLIMPORT.
Back-patch commit 09a65f5a2 into the 9.6 and 10 branches.
Needed to support back-patch of commit 2cd4e8357 on Windows.

Discussion: http://postgr.es/m/20190604011354.GD1529@paquier.xyz
2019-06-03 21:25:43 -04:00
Tom Lane
ba38967d75 Fix contrib/auto_explain to not cause problems in parallel workers.
A parallel worker process should not be making any decisions of its
own about whether to auto-explain.  If the parent session process
passed down flags asking for instrumentation data, do that, otherwise
not.  Trying to enable instrumentation anyway leads to bugs like the
"could not find key N in shm TOC" failure reported in bug #15821
from Christian Hofstaedtler.

We can implement this cheaply by piggybacking on the existing logic
for not doing anything when we've chosen not to sample a statement.

While at it, clean up some tin-eared coding related to the sampling
feature, including an off-by-one error that meant that asking for 1.0
sampling rate didn't actually result in sampling every statement.
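
A short sketch of the settings involved (the GUC names are auto_explain's
documented parameters):

LOAD 'auto_explain';
SET auto_explain.log_min_duration = 0;   -- log plans for all statements
SET auto_explain.sample_rate = 1.0;      -- previously, because of the
                                         -- off-by-one, 1.0 still skipped
                                         -- some statements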

Although the specific case reported here only manifested in >= v11,
I believe that related misbehaviors can be demonstrated in any version
that has parallel query; and the off-by-one error is certainly there
back to 9.6 where that feature was added.  So back-patch to 9.6.

Discussion: https://postgr.es/m/15821-5eb422e980594075@postgresql.org
2019-06-03 18:06:04 -04:00
Michael Paquier
5b8c93c874 Fix documentation of check_option in information_schema.views
Support for CHECK OPTION on updatable views was added in 9.4, but the
information_schema documentation was never updated accordingly, even
though the information it displays is correct.
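
For example (view and table names are hypothetical):

CREATE TABLE vals (v int);
CREATE VIEW positive_vals AS
    SELECT v FROM vals WHERE v > 0
    WITH CHECK OPTION;

SELECT table_name, check_option
  FROM information_schema.views
 WHERE table_name = 'positive_vals';   -- reports CASCADED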

Author: Gilles Darold
Discussion: https://postgr.es/m/75d07704-6c74-4f26-656a-10045c01a17e@darold.net
Backpatch-through: 9.4
2019-06-01 15:34:02 -04:00
Tom Lane
683c17b307 Fix C++ incompatibilities in plpgsql's header files.
Rename some exposed parameters so that they don't conflict with
C++ reserved words.

Back-patch to all supported versions.

George Tarasov

Discussion: https://postgr.es/m/b517ec3918d645eb950505eac8dd434e@gaz-is.ru
2019-05-31 12:34:54 -04:00
Tomas Vondra
39c9efc156 Make error logging in extended statistics more consistent
Most errors reported in extended statistics are internal issues, and so
should use elog(). The MCV list code was already following this rule, but
the functional dependencies and ndistinct coefficients were using a mix
of elog() and ereport(). Fix this by changing most places to elog(), with
the exception of input functions.

This is a mostly cosmetic change; it makes life a little bit easier
for translators, as elog() messages are not translated.  So backpatch to
PostgreSQL 10, where extended statistics were introduced.

Author: Tomas Vondra
Backpatch-through: 10 where extended statistics were added
Discussion: https://postgr.es/m/20190503154404.GA7478@alvherre.pgsql
2019-05-30 17:06:35 +02:00
Noah Misch
b31d88b010 MSVC: Add "use File::Path qw(rmtree)".
My back-patch of commit 10b72deafea5972edcafb9eb3f97154f32ccd340 added
calls to File::Path::rmtree(), but v10 and older had not been importing
that symbol.  Back-patch to v10, 9.6 and 9.5.
2019-05-28 19:28:36 -07:00
Noah Misch
c44e9bc3a1 In the pg_upgrade test suite, don't write to src/test/regress.
When this suite runs installcheck, redirect file creations from
src/test/regress to src/bin/pg_upgrade/tmp_check/regress.  This closes a
race condition in "make -j check-world".  If the pg_upgrade suite wrote
to a given src/test/regress/results file in parallel with the regular
src/test/regress invocation writing it, a test failed spuriously.  Even
without parallelism, in "make -k check-world", the suite finishing
second overwrote the other's regression.diffs.  This revealed test
"largeobject" assuming @abs_builddir@ is getcwd(), so fix that, too.

Buildfarm client REL_10, released fifty-four days ago, supports saving
regression.diffs from its new location.  When an older client reports a
pg_upgradeCheck failure, it will no longer include regression.diffs.
Back-patch to 9.5, where pg_upgrade moved to src/bin.

Reviewed (in earlier versions) by Andrew Dunstan.

Discussion: https://postgr.es/m/20181224034411.GA3224776@rfd.leadboat.com
2019-05-28 13:00:16 -07:00
Noah Misch
8e2b41ecf8 In the pg_upgrade test suite, remove and recreate "tmp_check".
This allows "vcregress upgradecheck" to pass twice in immediate
succession, and it's more like how $(prove_check) works.  Back-patch to
9.5, where pg_upgrade moved to src/bin.

Discussion: https://postgr.es/m/20190520012436.GA1480421@rfd.leadboat.com
2019-05-28 12:58:34 -07:00
Andres Freund
9ba3915ab6 pg_upgrade: Make test.sh's installcheck use to-be-upgraded version's bindir.
On master (after 700538) the old version's installed psql was used, even
when the old version might not actually be installed or might be installed
into a temporary directory.  That is commonly the case when just executing
make check for pg_upgrade, as $oldbindir is just the current version's
$bindir.

In the back branches, with --install specified, psql from the new
version's temporary installation was used, without --install (e.g for
NO_TEMP_INSTALL, cf 47b3c26642), the new version's installed psql was
used (which might or might not exist).

Author: Andres Freund
Discussion: https://postgr.es/m/20190522175150.c26f4jkqytahajdg@alap3.anarazel.de
2019-05-23 14:49:59 -07:00
Andrew Gierth
99efd8d727 Fix array size allocation for HashAggregate hash keys.
When there were duplicate columns in the hash key list, the array
sizes could be miscomputed, resulting in access off the end of the
array. Adjust the computation to ensure the array is always large
enough.

(I considered whether the duplicates could be removed in planning, but
I can't rule out the possibility that duplicate columns might have
different hash functions assigned. Simpler to just make sure it works
at execution time regardless.)

Bug apparently introduced in fc4b3dea2 as part of narrowing down the
tuples stored in the hashtable. Reported by Colm McHugh of Salesforce,
though I didn't use their patch. Backpatch back to version 10 where
the bug was introduced.

Discussion: https://postgr.es/m/CAFeeJoKKu0u+A_A9R9316djW-YW3-+Gtgvy3ju655qRHR3jtdA@mail.gmail.com
2019-05-23 15:39:17 +01:00
Michael Paquier
2ccebcd236 Fix ordering of GRANT commands in pg_dumpall for tablespaces
This uses a method similar to 68a7c24f and now b8c6014 (applied for
database creation), which guarantees that GRANT commands using WITH
GRANT OPTION are dumped in a way such that cascading dependencies are
respected.  Note that tablespaces do not have support for initial
privileges via pg_init_privs, so the same method needs to be applied
again.  It would be nice to merge all the logic generating ACL queries
in dumps under the same banner, but this requires extending the support
of pg_init_privs to objects that cannot use it yet, so this is left as
future work.
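
A sketch of the kind of cascading dependency involved (role and tablespace
names are hypothetical):

GRANT CREATE ON TABLESPACE ts_data TO alice WITH GRANT OPTION;
SET SESSION AUTHORIZATION alice;
GRANT CREATE ON TABLESPACE ts_data TO bob;
RESET SESSION AUTHORIZATION;
-- bob's privilege was granted by alice, so the dump must emit alice's
-- GRANT before the one she issued to bob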

Discussion: https://postgr.es/m/20190522071555.GB1278@paquier.xyz
Author: Michael Paquier
Reviewed-by: Nathan Bossart
Backpatch-through: 9.6
2019-05-23 10:48:29 +09:00
Michael Paquier
0c2a5a8626 Fix ordering of GRANT commands in pg_dumpall for database creation
This uses a method similar to 68a7c24f, which guarantees that GRANT
commands using WITH GRANT OPTION are dumped in a way such that cascading
dependencies are respected.  As databases do not have support for
initial privileges via pg_init_privs, we need to repeat the same ACL
reordering method.

ACLs for databases were moved from pg_dumpall to pg_dump in v11, so
this impacts pg_dump for v11 and above, and pg_dumpall for v9.6 and
v10.

Discussion: https://postgr.es/m/15788-4e18847520ebcc75@postgresql.org
Author: Nathan Bossart
Reviewed-by: Haribabu Kommi
Backpatch-through: 9.6
2019-05-22 14:48:30 +09:00
Michael Paquier
7f920b8f71 Fix some grammar in documentation of spgist and pgbench
Discussion: https://postgr.es/m/92961161-9b49-e42f-0a72-d5d47e0ed4de@postgrespro.ru
Author: Liudmila Mantrova
Reviewed-by: Jonathan Katz, Tom Lane, Michael Paquier
Backpatch-through: 9.4
2019-05-20 09:48:37 +09:00
Noah Misch
e0a39a1d9a Revert "In the pg_upgrade test suite, don't write to src/test/regress."
This reverts commit bd1592e8570282b1650af6b8eede0016496daecd.  It had
multiple defects.

Discussion: https://postgr.es/m/12717.1558304356@sss.pgh.pa.us
2019-05-19 15:24:46 -07:00
Noah Misch
422584caf3 In the pg_upgrade test suite, don't write to src/test/regress.
When this suite runs installcheck, redirect file creations from
src/test/regress to src/bin/pg_upgrade/tmp_check/regress.  This closes a
race condition in "make -j check-world".  If the pg_upgrade suite wrote
to a given src/test/regress/results file in parallel with the regular
src/test/regress invocation writing it, a test failed spuriously.  Even
without parallelism, in "make -k check-world", the suite finishing
second overwrote the other's regression.diffs.  This revealed test
"largeobject" assuming @abs_builddir@ is getcwd(), so fix that, too.

Buildfarm client REL_10, released forty-five days ago, supports saving
regression.diffs from its new location.  When an older client reports a
pg_upgradeCheck failure, it will no longer include regression.diffs.
Back-patch to 9.5, where pg_upgrade moved to src/bin.

Reviewed by Andrew Dunstan.

Discussion: https://postgr.es/m/20181224034411.GA3224776@rfd.leadboat.com
2019-05-19 14:37:23 -07:00
Andres Freund
04595960a0 Add isolation test for INSERT ON CONFLICT speculative insertion failure.
This path previously was not reliably covered. There was some
heuristic coverage via insert-conflict-toast.spec, but that test is
not deterministic, and only tested for a somewhat specific bug.

Backpatch, as this is a complicated and otherwise untested code
path. Unfortunately 9.5 cannot handle two waiting sessions, and thus
cannot execute this test.
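
A rough sketch of the race the new test exercises (table name hypothetical):

CREATE TABLE upsert_target (key int PRIMARY KEY, val text);
-- session 1:
INSERT INTO upsert_target VALUES (1, 'one') ON CONFLICT (key) DO NOTHING;
-- session 2, concurrently:
INSERT INTO upsert_target VALUES (1, 'uno') ON CONFLICT (key) DO NOTHING;
-- whichever insertion loses the race must abandon its speculatively
-- inserted tuple instead of raising a unique-violation error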

Triggered by a conversation with Melanie Plageman.

Author: Andres Freund
Discussion: https://postgr.es/m/CAAKRu_a7hbyrk=wveHYhr4LbcRnRCG=yPUVoQYB9YO1CdUBE9Q@mail.gmail.com
Backpatch: 9.5-
2019-05-14 11:54:06 -07:00
Heikki Linnakangas
d7bf9ad843 Fix comment on when HOT update is possible.
The conditions listed in this comment have changed several times, and at
some point the thing that the "if so" referred to was negated.

The text was OK up to 9.6. It was differently wrong in v10, v11 and
master, so fix in all those versions.
2019-05-14 13:09:28 +03:00
Peter Geoghegan
0569f047e8 Doc: Refer to line pointers as item identifiers.
An upcoming HEAD-only patch will standardize the terminology around
ItemIdData variables/line pointers, ending the practice of referring to
them as "item pointers".  Make the "Database Page Layout" docs
consistent with the new policy.  The term "item identifier" is already
used in the same section, so stick with that.

Discussion: https://postgr.es/m/CAH2-Wz=c=MZQjUzde3o9+2PLAPuHTpVZPPdYxN=E4ndQ2--8ew@mail.gmail.com
Backpatch: All supported branches.
2019-05-13 15:39:03 -07:00
Tom Lane
3b505036a1 Fix logical replication's ideas about which type OIDs are built-in.
Only hand-assigned type OIDs should be presumed to match across different
PG servers; those assigned during genbki.pl or during initdb are likely
to change due to addition or removal of unrelated objects.

This means that the cutoff should be FirstGenbkiObjectId (in HEAD)
or FirstBootstrapObjectId (before that), not FirstNormalObjectId.
Compare postgres_fdw's is_builtin() test.

It's likely that this error has no observable consequence in a
normally-functioning system, since ATM the only affected type OIDs are
system catalog rowtypes and information_schema types, which would not
typically be interesting for logical replication.  But you could
probably break it if you tried hard, so back-patch.

Discussion: https://postgr.es/m/15150.1557257111@sss.pgh.pa.us
2019-05-13 17:23:00 -04:00
Tom Lane
e3bf3c0f8c Fix misuse of an integer as a bool.
pgtls_read_pending is declared to return bool, but what the underlying
SSL_pending function returns is a count of available bytes.

This is actually somewhat harmless if we're using C99 bools, but in
the back branches it's a live bug: if the available-bytes count happened
to be a multiple of 256, it would get converted to a zero char value.
On machines where char is signed, counts of 128 and up could misbehave
as well.  The net effect is that when using SSL, libpq might block
waiting for data even though some has already been received.

Broken by careless refactoring in commit 4e86f1b16, so back-patch
to 9.5 where that came in.

Per bug #15802 from David Binderman.

Discussion: https://postgr.es/m/15802-f0911a97f0346526@postgresql.org
2019-05-13 10:53:19 -04:00
Etsuro Fujita
7c16a2bfdc postgres_fdw: Fix typo in comment.
2019-05-13 17:30:38 +09:00
Tom Lane
940f647925 Fix misoptimization of "{1,1}" quantifiers in regular expressions.
A bounded quantifier with m = n = 1 might be thought a no-op.  But
according to our documentation (which traces back to Henry Spencer's
original man page) it still imposes greediness, or non-greediness in the
case of the non-greedy variant "{1,1}?", on whatever it's attached to.

This turns out not to work though, because parseqatom() optimizes away
the m = n = 1 case without regard for whether it's supposed to change
the greediness of the argument RE.

We can fix this by just not applying the optimization when the greediness
needs to change; the subsequent general cases handle it fine.

The three cases in which we can still apply the optimization are
(a) no quantifier, or quantifier does not impose a preference;
(b) atom has no greediness property, implying it cannot match a
variable amount of text anyway; or
(c) quantifier's greediness is same as atom's.
Note that in most cases where one of these applies, we'd have exited
earlier in the "not a messy case" fast path.  I think it's now only
possible to get to the optimization when the atom involves capturing
parentheses or a non-top-level backref.
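
A brief illustration of the documented behavior (example strings made up;
the second query returns the shorter match once this fix is applied):

SELECT substring('abcabc' from '(a.*c){1,1}');    -- greedy: 'abcabc'
SELECT substring('abcabc' from '(a.*c){1,1}?');   -- non-greedy: 'abc'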

Back-patch to all supported branches.  I'd ordinarily be hesitant to
put a subtle behavioral change into back branches, but in this case
it's very hard to see a reason why somebody would write "{1,1}?" unless
they're trying to get the documented change-of-greediness behavior.

Discussion: https://postgr.es/m/5bb27a41-350d-37bf-901e-9d26f5592dd0@charter.net
2019-05-12 18:53:41 -04:00
Noah Misch
409f5303ce Fail pgwin32_message_to_UTF16() for SQL_ASCII messages.
The function had been interpreting SQL_ASCII messages as UTF8, throwing
an error when they were invalid UTF8.  The new behavior is consistent
with pg_do_encoding_conversion().  This affects LOG_DESTINATION_STDERR
and LOG_DESTINATION_EVENTLOG, which will send untranslated bytes to
write() and ReportEventA().  On buildfarm member bowerbird, enabling
log_connections caused an error whenever the role name was not valid
UTF8.  Back-patch to 9.4 (all supported versions).

Discussion: https://postgr.es/m/20190512015615.GD1124997@rfd.leadboat.com
2019-05-12 10:33:08 -07:00
Tom Lane
c3d113136b Rearrange pgstat_bestart() to avoid failures within its critical section.
We long ago decided to design the shared PgBackendStatus data structure to
minimize the cost of writing status updates, which means that writers just
have to increment the st_changecount field twice.  That isn't hooked into
any sort of resource management mechanism, which means that if something
were to throw error between the two increments, the st_changecount field
would be left odd indefinitely.  That would cause readers to lock up.
Now, since it's also a bad idea to leave the field odd for longer than
absolutely necessary (because readers will spin while we have it set),
the expectation was that we'd treat these segments like spinlock critical
sections, with only short, more or less straight-line, code in them.

That was fine as originally designed, but commit 9029f4b37 broke it
by inserting a significant amount of non-straight-line code into
pgstat_bestart(), code that is very capable of throwing errors, not to
mention taking a significant amount of time during which readers will spin.
We have a report from Neeraj Kumar of readers actually locking up, which
I suspect was due to an encoding conversion error in X509_NAME_to_cstring,
though conceivably it was just a garden-variety OOM failure.

Subsequent commits have loaded even more dubious code into pgstat_bestart's
critical section (and commit fc70a4b0d deserves some kind of booby prize
for managing to miss the critical section entirely, although the negative
consequences seem minimal given that the PgBackendStatus entry should be
seen by readers as inactive at that point).

The right way to fix this mess seems to be to compute all these values
into a local copy of the process' PgBackendStatus struct, and then just
copy the data back within the critical section proper.  This plan can't
be implemented completely cleanly because of the struct's heavy reliance
on out-of-line strings, which we must initialize separately within the
critical section.  But still, the critical section is far smaller and
safer than it was before.

In hopes of forestalling future errors of the same ilk, rename the
macros for st_changecount management to make it more apparent that
the writer-side macros create a critical section.  And to prevent
the worst consequences if we nonetheless manage to mess it up anyway,
adjust those macros so that they really are a critical section, ie
they now bump CritSectionCount.  That doesn't add much overhead, and
it guarantees that if we do somehow throw an error while the counter
is odd, it will lead to PANIC and a database restart to reset shared
memory.

Back-patch to 9.5 where the problem was introduced.

In HEAD, also fix an oversight in commit b0b39f72b: it failed to teach
pgstat_read_current_status to copy st_gssstatus data from shared memory to
local memory.  Hence, subsequent use of that data within the transaction
would potentially see changing data that it shouldn't see.

Discussion: https://postgr.es/m/CAPR3Wj5Z17=+eeyrn_ZDG3NQGYgMEOY6JV6Y-WRRhGgwc16U3Q@mail.gmail.com
2019-05-11 21:27:13 -04:00
Noah Misch
7a6a541234 Honor TEMP_CONFIG in TAP suites.
The buildfarm client uses TEMP_CONFIG to implement its extra_config
setting.  Except for stats_temp_directory, extra_config now applies to
TAP suites; extra_config values seen in the past month are compatible
with this.  Back-patch to 9.6, where PostgresNode was introduced, so the
buildfarm can rely on it sooner.

Reviewed by Andrew Dunstan and Tom Lane.

Discussion: https://postgr.es/m/20181229021950.GA3302966@rfd.leadboat.com
2019-05-11 00:23:02 -07:00