mirror of https://github.com/postgres/postgres.git synced 2025-05-28 05:21:27 +03:00

36821 Commits

Author SHA1 Message Date
Tom Lane
e89867c033 Doc: update URL for check_postgres.
Reported by Dan Vianello.

Discussion: https://postgr.es/m/e6e12f18f70e46848c058084d42fb651@KSTLMEXGP001.CORP.CHARTERCOM.com
2017-11-01 22:07:14 -04:00
Michael Meskes
d64a4d3683 Make sure ecpglib accepts digits behind the decimal point even for
integers in Informix mode.

Spotted and fixed by 高增琦 <pgf00a@gmail.com>
2017-11-01 13:41:21 +01:00
Tom Lane
e06b9e9dc8 Dept of second thoughts: keep aliasp_item in sync with tlistitem.
Commit d5b760ecb wasn't quite right, on second thought: if the
caller didn't ask for column names then it would happily emit
more Vars than if the caller did ask for column names.  This
is surely not a good idea.  Advance the aliasp_item whether or
not we're preparing a colnames list.
2017-10-27 18:16:25 -04:00
Tom Lane
9d15b8b36a Fix crash when columns have been added to the end of a view.
expandRTE() supposed that an RTE_SUBQUERY subquery must have exactly
as many non-junk tlist items as the RTE has column aliases for it.
This was true at the time the code was written, and is still true so
far as parse analysis is concerned --- but when the function is used
during planning, the subquery might have appeared through insertion
of a view that now has more columns than it did when the outer query
was parsed.  This results in a core dump if, for instance, we have
to expand a whole-row Var that references the subquery.

To avoid crashing, we can either stop expanding the RTE when we run
out of aliases, or invent new aliases for the added columns.  While
the latter might be more useful, the former is consistent with what
expandRTE() does for composite-returning functions in the RTE_FUNCTION
case, so it seems like we'd better do it that way.

Per bug #14876 from Samuel Horwitz.  This has been busted since commit
ff1ea2173 allowed views to acquire more columns, so back-patch to all
supported branches.

Discussion: https://postgr.es/m/20171026184035.1471.82810@wrigleys.postgresql.org
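
For illustration, a minimal sketch of the loop shape this implies in the
RTE_SUBQUERY case (simplified, not the committed code; pre-v13 List API):

    ListCell   *aliasp_item = list_head(rte->eref->colnames);
    ListCell   *tlistitem;

    foreach(tlistitem, rte->subquery->targetList)
    {
        TargetEntry *te = (TargetEntry *) lfirst(tlistitem);

        if (te->resjunk)
            continue;
        if (aliasp_item == NULL)
            break;              /* ran out of aliases: view gained columns */

        /* ... build and emit a Var for this output column ... */

        aliasp_item = lnext(aliasp_item);
    }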
2017-10-27 17:10:21 -04:00
Tom Lane
be203c36a6 Rethink the dependencies recorded for FieldSelect/FieldStore nodes.
On closer investigation, commits f3ea3e3e8 et al were a few bricks
shy of a load.  What we need is not so much to lock down the result
type of a FieldSelect, as to lock down the existence of the column
it's trying to extract.  Otherwise, we can break it by dropping that
column.  The dependency on the result type is then held indirectly
through the column, and doesn't need to be recorded explicitly.

Out of paranoia, I left in the code to record a dependency on the
result type, but it's used only if we can't identify the pg_class OID
for the column.  That shouldn't ever happen right now, AFAICS, but
it seems possible that in future the input node could be marked as
being of type RECORD rather than some specific composite type.

Likewise for FieldStore.

Like the previous patch, back-patch to all supported branches.

Discussion: https://postgr.es/m/22571.1509064146@sss.pgh.pa.us
2017-10-27 12:18:57 -04:00
Tom Lane
7102efd9d7 Doc: mention that you can't PREPARE TRANSACTION after NOTIFY.
The NOTIFY page said this already, but the PREPARE TRANSACTION page
missed it.

Discussion: https://postgr.es/m/20171024010602.1488.80066@wrigleys.postgresql.org
2017-10-27 10:46:07 -04:00
Andrew Dunstan
0cf721244a Improve gendef.pl diagnostic on failure to open sym file
There have been numerous buildfarm failures but the diagnostic is
currently silent about the reason for failure to open the file. Let's
see if we can get to the bottom of it.

Backpatch to all live branches.
2017-10-26 10:16:35 -04:00
Tom Lane
6dd7a12075 Fix libpq to not require user's home directory to exist.
Some people like to run libpq-using applications in environments where
there's no home directory.  We've broken that scenario before (cf commits
5b4067798 and bd58d9d88), and commit ba005f193 broke it again, by making
it a hard error if we fail to get the home directory name while looking
for ~/.pgpass.  The previous precedent is that if we can't get the home
directory name, we should just silently act as though the file we hoped
to find there doesn't exist.  Rearrange the new code to honor that.

Looking around, the service-file code added by commit 41a4e4595 had the
same disease.  Apparently, that escaped notice because it only runs when
a service name has been specified, which I guess the people who use this
scenario don't do.  Nonetheless, it's wrong too, so fix that case as well.

Add a comment about this policy to pqGetHomeDirectory, in the probably
vain hope of forestalling the same error in future.  And upgrade the
rather miserable commenting in parseServiceInfo, too.

In passing, also back off parseServiceInfo's assumption that only ENOENT
is an ignorable error from stat() when checking a service file.  We would
need to ignore at least ENOTDIR as well (cf 5b4067798), and seeing that
the far-better-tested code for ~/.pgpass treats all stat() failures alike,
I think this code ought to as well.

Per bug #14872 from Dan Watson.  Back-patch the .pgpass change to v10
where ba005f193 came in.  The service-file bugs are far older, so
back-patch the other changes to all supported branches.

Discussion: https://postgr.es/m/20171025200457.1471.34504@wrigleys.postgresql.org
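
A minimal sketch of that policy as it applies to the ~/.pgpass lookup
(illustrative only; error-message and locking details omitted):

    char        homedir[MAXPGPATH];
    char        pgpassfile[MAXPGPATH];
    struct stat stat_buf;

    if (!pqGetHomeDirectory(homedir, sizeof(homedir)))
        return NULL;        /* no home directory: act as if no ~/.pgpass */
    snprintf(pgpassfile, sizeof(pgpassfile), "%s/%s", homedir, PGPASSFILE);
    if (stat(pgpassfile, &stat_buf) != 0)
        return NULL;        /* ignore all stat() failures, not just ENOENT */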
2017-10-25 19:32:24 -04:00
Tom Lane
da82bb1d8f Update time zone data files to tzdata release 2017c.
DST law changes in Fiji, Namibia, Northern Cyprus, Sudan, Tonga,
and Turks & Caicos Islands.  Historical corrections for Alaska, Apia,
Burma, Calcutta, Detroit, Ireland, Namibia, and Pago Pago.
2017-10-23 18:16:00 -04:00
Tom Lane
9c74dd2d5b Sync our copy of the timezone library with IANA release tzcode2017c.
This is a trivial update containing only cosmetic changes.  The point
is just to get back to being synced with an official release of tzcode,
rather than some ad-hoc point in their commit history, which is where
commit 47f849a3c left it.
2017-10-23 17:54:09 -04:00
Tom Lane
dde99de116 Fix some oversights in expression dependency recording.
find_expr_references() neglected to record a dependency on the result type
of a FieldSelect node, allowing a DROP TYPE to break a view or rule that
contains such an expression.  I think we'd omitted this case intentionally,
reasoning that there would always be a related dependency ensuring that the
DROP would cascade to the view.  But at least with nested field selection
expressions, that's not true, as shown in bug #14867 from Mansur Galiev.
Add the dependency, and for good measure a dependency on the node's exposed
collation.

Likewise add a dependency on the result type of a FieldStore.  I think here
the reasoning was that it'd only appear within an assignment to a field,
and the dependency on the field's column would be enough ... but having
seen this example, I think that's wrong for nested-composites cases.

Looking at nearby code, I notice we're not recording a dependency on the
exposed collation of CoerceViaIO, which seems inconsistent with our choices
for related node types.  Maybe that's OK but I'm feeling suspicious of this
code today, so let's add that; it certainly can't hurt.

This patch does not do anything to protect already-existing views, only
views created after it's installed.  But seeing that the issue has been
there a very long time and nobody noticed till now, that's probably good
enough.

Back-patch to all supported branches.

Discussion: https://postgr.es/m/20171023150118.1477.19174@wrigleys.postgresql.org
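
Concretely, the FieldSelect case in find_expr_references ends up adding
object addresses along these lines (a sketch of the idea, not necessarily
the exact committed hunk):

    FieldSelect *fselect = (FieldSelect *) node;

    /* depend on the result type ... */
    add_object_address(OCLASS_TYPE, fselect->resulttype, 0,
                       context->addrs);
    /* ... and on the exposed collation, if any */
    if (OidIsValid(fselect->resultcollid) &&
        fselect->resultcollid != DEFAULT_COLLATION_OID)
        add_object_address(OCLASS_COLLATION, fselect->resultcollid, 0,
                           context->addrs);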
2017-10-23 13:57:46 -04:00
Tom Lane
7c70a129ef Fix typcache's failure to treat ranges as container types.
Like the similar logic for arrays and records, it's necessary to examine
the range's subtype to decide whether the range type can support hashing.
We can omit checking the subtype for btree-defined operations, though,
since range subtypes are required to have those operations.  (Possibly
that simplification for btree cases led us to overlook that it does
not apply for hash cases.)

This is only an issue if the subtype lacks hash support, which is not
true of any built-in range type, but it's easy to demonstrate a problem
with a range type over, eg, money: you can get a "could not identify
a hash function" failure when the planner is misled into thinking that
hash join or aggregation would work.

This was born broken, so back-patch to all supported branches.
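
The shape of the fix in the typcache, sketched; range_subtype_has_hashing()
is a hypothetical stand-in for the real subtype lookup:

    if (typentry->typtype == TYPTYPE_RANGE)
    {
        /* like arrays and records: hashability hinges on the subtype */
        if (!range_subtype_has_hashing(typentry))   /* hypothetical helper */
            hash_proc = InvalidOid;     /* "could not identify..." otherwise */
    }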
2017-10-20 17:12:27 -04:00
Tom Lane
06b2a73eda Fix misparsing of non-newline-terminated pg_hba.conf files.
This back-patches the v10-cycle commit 1e5a5d03d into 9.3 - 9.6.
I had noticed at the time that that was fixing a bug, namely that
next_token() might advance *lineptr past the line-terminating '\0',
but given the lack of field complaints I too easily convinced myself
that the problem was only latent.  It's not, because tokenize_file()
decides whether there's more on the line using "strlen(lineptr)".

The bug is indeed latent on a newline-terminated line, because then
the newline-stripping bit in tokenize_file() means we'll have two
or more consecutive '\0's in the buffer, masking the fact that we
accidentally advanced over the first one.  But the last line in
the file might not be null-terminated, allowing the loop to see
and process garbage, as reported by Mark Jones in bug #14859.

The bug doesn't exist in <= 9.2; there next_token() is reading directly
from a file, and termination of the outer loop relies on a feof() test,
not a buffer pointer check.  Probably commit 7f49a67f9 can be blamed
for this bug, but I didn't track it down exactly.

Commit 1e5a5d03d does a bit more than the minimum needed to fix the
bug, but I felt the rest of it was good cleanup, so I'm applying it all.

Discussion: https://postgr.es/m/20171017141814.8203.27280@wrigleys.postgresql.org
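
The pointer bug in miniature (a sketch; is_separator() is a stand-in for
the real token-boundary test):

    char        c;

    /* broken shape: the post-increment consumes the terminating '\0',
     * leaving *lineptr pointing past end-of-string */
    while ((c = *(*lineptr)++) != '\0' && !is_separator(c))
        /* collect token characters */ ;

    /* fixed shape: look before stepping, so *lineptr stops at the '\0' */
    while (**lineptr != '\0' && !is_separator(**lineptr))
        (*lineptr)++;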
2017-10-17 12:15:08 -04:00
Tom Lane
f7126b4d65 Doc: fix missing explanation of default object privileges.
The GRANT reference page, which lists the default privileges for new
objects, failed to mention that USAGE is granted by default for data
types and domains.  As a lesser sin, it also did not specify anything
about the initial privileges for sequences, FDWs, foreign servers,
or large objects.  Fix that, and add a comment to acldefault() in the
probably vain hope of getting people to maintain this list in future.

Noted by Laurenz Albe, though I editorialized on the wording a bit.
Back-patch to all supported branches, since they all have this behavior.

Discussion: https://postgr.es/m/1507620895.4152.1.camel@cybertec.at
2017-10-11 16:56:59 -04:00
Tom Lane
7573d122f1 Fix low-probability loss of NOTIFY messages due to XID wraparound.
Up to now async.c has used TransactionIdIsInProgress() to detect whether
a notify message's source transaction is still running.  However, that
function has a quick-exit path that reports that XIDs before RecentXmin
are no longer running.  If a listening backend is doing nothing but
listening, and not running any queries, there is nothing that will advance
its value of RecentXmin.  Once 2 billion transactions elapse, the
RecentXmin check causes active transactions to be reported as not running.
If they aren't committed yet according to CLOG, async.c decides they
aborted and discards their messages.  The timing for that is a bit tight
but it can happen when multiple backends are sending notifies concurrently.
The net symptom therefore is that a sufficiently-long-surviving
listen-only backend starts to miss some fraction of NOTIFY traffic,
but only under heavy load.

The only function that updates RecentXmin is GetSnapshotData().
A brute-force fix would therefore be to take a snapshot before
processing incoming notify messages.  But that would add cycles,
as well as contention for the ProcArrayLock.  We can be smarter:
having taken the snapshot, let's use that to check for running
XIDs, and not call TransactionIdIsInProgress() at all.  In this
way we reduce the number of ProcArrayLock acquisitions from one
per message to one per notify interrupt; that's the same under
light load but should be a benefit under heavy load.  Light testing
says that this change is a wash performance-wise for normal loads.

I looked around for other callers of TransactionIdIsInProgress()
that might be at similar risk, and didn't find any; all of them
are inside transactions that presumably have already taken a
snapshot.

Problem report and diagnosis by Marko Tiikkaja, patch by me.
Back-patch to all supported branches, since it's been like this
since 9.0.

Discussion: https://postgr.es/m/20170926182935.14128.65278@wrigleys.postgresql.org
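
In outline, the per-interrupt flow becomes something like this (a sketch
under the reasoning above, not the committed code):

    /* one snapshot (one ProcArrayLock acquisition) per notify interrupt */
    Snapshot    snapshot = RegisterSnapshot(GetLatestSnapshot());

    /* then, for each queued message whose source transaction is "xid": */
    if (XidInMVCCSnapshot(xid, snapshot))
    {
        /* still in progress: stop here; re-examine on a later interrupt */
    }
    else if (TransactionIdDidCommit(xid))
    {
        /* committed: deliver the message to the client */
    }
    else
    {
        /* must have aborted: discard the message */
    }

    UnregisterSnapshot(snapshot);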
2017-10-11 14:28:33 -04:00
Tom Lane
d45fb6e9d8 Fix access-off-end-of-array in clog.c.
Sloppy loop coding in set_status_by_pages() resulted in fetching one array
element more than it should from the subxids[] array.  The odds of this
resulting in SIGSEGV are pretty small, but we've certainly seen that happen
with similar mistakes elsewhere.  While at it, we can get rid of an extra
TransactionIdToPage() calculation per loop.

Per report from David Binderman.  Back-patch to all supported branches,
since this code is quite old.

Discussion: https://postgr.es/m/HE1PR0802MB2331CBA919CBFFF0C465EB429C710@HE1PR0802MB2331.eurprd08.prod.outlook.com
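
A sketch of the corrected loop shape, hoisting the page calculation and
never reading subxids[] past the end:

    int         i = 0;

    while (i < nsubxids)
    {
        int         num_on_page = 0;
        int         pageno = TransactionIdToPage(subxids[i]);

        /* count the subxids on this clog page, stopping at the array end */
        while (i < nsubxids && TransactionIdToPage(subxids[i]) == pageno)
        {
            num_on_page++;
            i++;
        }
        /* ... set status for these num_on_page xids on page pageno ... */
    }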
2017-10-06 12:20:13 -04:00
Alvaro Herrera
b052d524ca Fix traversal of half-frozen update chains
When some tuple versions in an update chain are frozen due to them being
older than freeze_min_age, the xmax/xmin trail can become broken.  This
breaks HOT (and probably other things).  A subsequent VACUUM can break
things in more serious ways, such as leaving orphan heap-only tuples
whose root HOT redirect items were removed.  This can be seen because
index creation (or REINDEX) complains with an error like
  ERROR:  XX000: failed to find parent tuple for heap-only tuple at (0,7) in table "t"

Because of relfrozenxid constraints, we cannot avoid the freezing of the
early tuples, so we must cope with the results: whenever we see an Xmin
of FrozenTransactionId, consider it a match for whatever the previous
Xmax value was.

This problem seems to have appeared in 9.3 with multixact changes,
though strictly speaking it seems unrelated.

Since 9.4 we have commit 37484ad2a "Change the way we mark tuples as
frozen", so the fix is simple: just compare the raw Xmin (still stored
in the tuple header, since freezing merely set an infomask bit) to the
Xmax.  But in 9.3 we rewrite the Xmin value to FrozenTransactionId, so
the original value is lost and we have nothing to compare the Xmax with.
To cope with that case we need to compare the Xmin with FrozenXid,
assume it's a match, and hope for the best.  Sadly, since you can
pg_upgrade a 9.3 instance containing half-frozen pages to newer
releases, we need to keep the old check in newer versions too, which
seems a bit brittle; I hope we can somehow get rid of that.

I didn't optimize the new function for performance.  The new coding is
probably a bit slower than before, since there is a function call rather
than a straight comparison, but I'd rather have it work correctly than
be fast but wrong.

This is a followup after 20b655224249 fixed a few related problems.
Apparently, in 9.6 and up there are more ways to get into trouble, but
in 9.3 - 9.5 I cannot reproduce a problem anymore with this patch, so
there must be a separate bug.

Reported-by: Peter Geoghegan
Diagnosed-by: Peter Geoghegan, Michael Paquier, Daniel Wood,
	Yi Wen Wong, Álvaro
Discussion: https://postgr.es/m/CAH2-Wznm4rCrhFAiwKPWTpEw2bXDtgROZK7jWWGucXeH3D1fmA@mail.gmail.com
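
The matching rule reduces to a small helper along these lines (a sketch of
the 9.4-and-up variant; macro choices are illustrative):

    static bool
    HeapTupleUpdateXmaxMatchesXmin(TransactionId xmax, HeapTupleHeader htup)
    {
        TransactionId xmin = HeapTupleHeaderGetRawXmin(htup);

        /* normal case: the successor's Xmin repeats our Xmax */
        if (TransactionIdEquals(xmax, xmin))
            return true;

        /*
         * A tuple frozen by 9.3 (possibly carried forward by pg_upgrade)
         * had its Xmin overwritten with FrozenTransactionId, so the
         * original value is gone: assume a match and hope for the best.
         */
        if (TransactionIdEquals(xmin, FrozenTransactionId))
            return true;

        return false;
    }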
2017-10-06 17:14:42 +02:00
Alvaro Herrera
b24f15f86d Fix coding rules violations in walreceiver.c
1. Since commit b1a9bad9e744 we had pstrdup() inside a
spinlock-protected critical section; reported by Andreas Seltenreich.
Turn those into strlcpy() to stack-allocated variables instead.
Backpatch to 9.6.

2. Since commit 9ed551e0a4fd we had a pfree() uselessly inside a
spinlock-protected critical section.  Tom Lane noticed in code review.
Move down.  Backpatch to 9.6.

3. Since commit 64233902d22b we had GetCurrentTimestamp() (a kernel
call) inside a spinlock-protected critical section.  Tom Lane noticed in
code review.  Move it up.  Backpatch to 9.2.

4. Since commit 1bb2558046cc we did elog(PANIC) while holding spinlock.
Tom Lane noticed in code review.  Release spinlock before dying.
Backpatch to 9.2.

Discussion: https://postgr.es/m/87h8vhtgj2.fsf@ansel.ydns.eu
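
Rules 1 and 3 illustrated together (a sketch; field names as in the
walreceiver shared-memory struct, direction of the copy simplified):

    /* rule 3: make the kernel call before taking the spinlock */
    TimestampTz now = GetCurrentTimestamp();

    SpinLockAcquire(&walrcv->mutex);
    walrcv->lastMsgReceiptTime = now;   /* plain stores only, no palloc */
    /* rule 1: strlcpy into a fixed buffer instead of pstrdup */
    strlcpy((char *) walrcv->conninfo, conninfo, MAXCONNINFO);
    SpinLockRelease(&walrcv->mutex);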
2017-10-03 14:58:25 +02:00
Alvaro Herrera
d149aa762c Fix freezing of a dead HOT-updated tuple
Vacuum calls page-level HOT prune to remove dead HOT tuples before doing
liveness checks (HeapTupleSatisfiesVacuum) on the remaining tuples.  But
concurrent transaction commit/abort may turn DEAD some of the HOT tuples
that survived the prune, before HeapTupleSatisfiesVacuum tests them.
This happens to activate the code that decides to freeze the tuple ...
which resuscitates it, duplicating data.

(This is especially bad if there are any unique constraints, because those
are now internally violated due to the duplicate entries, though you
won't know until you try to REINDEX or dump/restore the table.)

One possible fix would be to simply skip doing anything to the tuple,
and hope that the next HOT prune would remove it.  But there is a
problem: if the tuple is older than freeze horizon, this would leave an
unfrozen XID behind, and if no HOT prune happens to clean it up before
the containing pg_clog segment is truncated away, it'd later cause an
error when the XID is looked up.

Fix the problem by having the tuple freezing routines cope with the
situation: don't freeze the tuple (and keep it dead).  In the cases that
the XID is older than the freeze age, set the HEAP_XMAX_COMMITTED flag
so that there is no need to look up the XID in pg_clog later on.

An isolation test is included, authored by Michael Paquier, loosely
based on Daniel Wood's original reproducer.  It only tests one
particular scenario, though, not all the possible ways for this problem
to surface; it'd be good to have a more reliable way to test this more
fully, but that would require more work.

In message https://postgr.es/m/20170911140103.5akxptyrwgpc25bw@alvherre.pgsql
I outlined another test case (more closely matching Dan Wood's) that
exposed a few more ways for the problem to occur.

Backpatch all the way back to 9.3, where this problem was introduced by
multixact juggling.  In branches 9.3 and 9.4, this includes a backpatch
of commit e5ff9fefcd50 (of 9.5 era), since the original is not
correctable without matching the coding pattern in 9.5 up.

Reported-by: Daniel Wood
Diagnosed-by: Daniel Wood
Reviewed-by: Yi Wen Wong, Michaël Paquier
Discussion: https://postgr.es/m/E5711E62-8FDF-4DCA-A888-C200BF6B5742@amazon.com
2017-09-28 16:44:01 +02:00
Tom Lane
2e82fba0e6 Fix behavior when converting a float infinity to numeric.
float8_numeric() and float4_numeric() failed to consider the possibility
that the input is an IEEE infinity.  The results depended on the
platform-specific behavior of sprintf(): on most platforms you'd get
something like

ERROR:  invalid input syntax for type numeric: "inf"

but at least on Windows it's possible for the conversion to succeed and
deliver a finite value (typically 1), due to a nonstandard output format
from sprintf and lack of syntax error checking in these functions.

Since our numeric type lacks the concept of infinity, a suitable conversion
is impossible; the best thing to do is throw an explicit error before
letting sprintf do its thing.

While at it, let's use snprintf not sprintf.  Overrunning the buffer
should be impossible if sprintf does what it's supposed to, but this
is cheap insurance against a stack smash if it doesn't.

Problem reported by Taiki Kondo.  Patch by me based on fix suggestion
from KaiGai Kohei.  Back-patch to all supported branches.

Discussion: https://postgr.es/m/12A9442FBAE80D4E8953883E0B84E088C8C7A2@BPXM01GP.gisp.nec.co.jp
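
The shape of the fix (a sketch; the exact errcode chosen may differ):

    Datum
    float8_numeric(PG_FUNCTION_ARGS)
    {
        float8      val = PG_GETARG_FLOAT8(0);
        char        buf[DBL_DIG + 100];

        if (isinf(val))
            ereport(ERROR,
                    (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE),
                     errmsg("cannot convert infinity to numeric")));

        /* snprintf, not sprintf: cheap insurance against a stack smash */
        snprintf(buf, sizeof(buf), "%.*g", DBL_DIG, val);

        PG_RETURN_DATUM(DirectFunctionCall3(numeric_in,
                                            CStringGetDatum(buf),
                                            ObjectIdGetDatum(InvalidOid),
                                            Int32GetDatum(-1)));
    }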
2017-09-27 17:05:54 -04:00
Noah Misch
43661926de Don't recommend "DROP SCHEMA information_schema CASCADE".
It drops objects outside information_schema that depend on objects
inside information_schema.  For example, it will drop a user-defined
view if the view query refers to information_schema.

Discussion: https://postgr.es/m/20170831025345.GE3963697@rfd.leadboat.com
2017-09-26 22:39:47 -07:00
Tom Lane
6c77b47b2e Improve wording of error message added in commit 714805010.
Per suggestions from Peter Eisentraut and David Johnston.
Back-patch, like the previous commit.

Discussion: https://postgr.es/m/E1dv9jI-0006oT-Fn@gemulon.postgresql.org
2017-09-26 15:25:57 -04:00
Peter Eisentraut
e0f5710c5e Fix saving and restoring umask
In two cases, we set a different umask for some piece of code and
restore it afterwards.  But if the contained code errors out, the umask
is not restored.  So add TRY/CATCH blocks to fix that.
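
The resulting pattern, sketched:

    mode_t      oumask = umask(S_IRWXG | S_IRWXO);

    PG_TRY();
    {
        /* ... code that creates files under the restricted mask ... */
    }
    PG_CATCH();
    {
        umask(oumask);          /* restore even when erroring out */
        PG_RE_THROW();
    }
    PG_END_TRY();
    umask(oumask);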
2017-09-23 10:05:40 -04:00
Tom Lane
2020f90bf6 Sync our copy of the timezone library with IANA tzcode master.
This patch absorbs a few unreleased fixes in the IANA code.
It corresponds to commit 2d8b944c1cec0808ac4f7a9ee1a463c28f9cd00a
in https://github.com/eggert/tz.  Non-cosmetic changes include:

TZDEFRULESTRING is updated to match current US DST practice,
rather than what it was over ten years ago.  This only matters
for interpretation of POSIX-style zone names (e.g., "EST5EDT"),
and only if the timezone database doesn't include either an exact
match for the zone name or a "posixrules" entry.  The latter
should not be true in any current Postgres installation, but
this could possibly matter when using --with-system-tzdata.

Get rid of a nonportable use of "++var" on a bool var.
This is part of a larger fix that eliminates some vestigial
support for consecutive leap seconds, and adds checks to
the "zic" compiler that the data files do not specify that.

Remove a couple of ancient compatibility hacks.  The IANA
crew think these are obsolete, and I tend to agree.  But
perhaps our buildfarm will think differently.

Back-patch to all supported branches, in line with our policy
that all branches should be using current IANA code.  Before v10,
this includes application of current pgindent rules, to avoid
whitespace problems in future back-patches.

Discussion: https://postgr.es/m/E1dsWhf-0000pT-F9@gemulon.postgresql.org
2017-09-22 00:04:21 -04:00
Tom Lane
a09d8be7dd Give a better error for duplicate entries in VACUUM/ANALYZE column list.
Previously, the code didn't think about this case and would just try to
analyze such a column twice.  That would fail at the point of inserting
the second version of the pg_statistic row, with obscure error messages
like "duplicate key value violates unique constraint" or "tuple already
updated by self", depending on context and PG version.  We could allow
the case by ignoring duplicate column specifications, but it seems better
to reject it explicitly.

The bogus error messages are arguably a bug in themselves, so back-patch to
all supported versions.

Nathan Bossart, per a report from Michael Paquier, and whacked
around a bit by me.

Discussion: https://postgr.es/m/E061A8E3-5E3D-494D-94F0-E8A9B312BBFC@amazon.com
2017-09-21 18:13:11 -04:00
Michael Meskes
149cfdb3a2 Fixed ECPG to correctly handle out-of-scope cursor declarations with pointers
or array variables.
2017-09-18 23:08:24 +02:00
Tom Lane
b1be335936 Fix possible dangling pointer dereference in trigger.c.
AfterTriggerEndQuery correctly notes that the query_stack could get
repalloc'd during a trigger firing, but it nonetheless passes the address
of a query_stack entry to afterTriggerInvokeEvents, so that if such a
repalloc occurs, afterTriggerInvokeEvents is already working with an
obsolete dangling pointer while it scans the rest of the events.  Oops.
The only code at risk is its "delete_ok" cleanup code, so we can
prevent unsafe behavior by passing delete_ok = false instead of true.

However, that could have a significant performance penalty, because the
point of passing delete_ok = true is to not have to re-scan possibly
a large number of dead trigger events on the next time through the loop.
There's more than one way to skin that cat, though.  What we can do is
delete all the "chunks" in the event list except the last one, since
we know all events in them must be dead.  Deleting the chunks is work
we'd have had to do later in AfterTriggerEndQuery anyway, and it ends
up saving rescanning of just about the same events we'd have gotten
rid of with delete_ok = true.

In v10 and HEAD, we also have to be careful to mop up any per-table
after_trig_events pointers that would become dangling.  This is slightly
annoying, but I don't think that normal use-cases will traverse this code
path often enough for it to be a performance problem.

It's pretty hard to hit this in practice because of the unlikelihood
of the query_stack getting resized at just the wrong time.  Nonetheless,
it's definitely a live bug of ancient standing, so back-patch to all
supported branches.

Discussion: https://postgr.es/m/2891.1505419542@sss.pgh.pa.us
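
The chunk-reclamation idea in outline (a sketch; struct details simplified):

    /*
     * Events in all but the last chunk must be dead by now; freeing those
     * chunks both avoids the dangling-pointer hazard and spares most of
     * the rescans that delete_ok = true used to save.
     */
    while (events->head != NULL && events->head != events->tail)
    {
        AfterTriggerEventChunk *chunk = events->head;

        events->head = chunk->next;
        pfree(chunk);
    }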
2017-09-17 14:50:01 -04:00
Tom Lane
a42f8d9792 Fix macro-redefinition warning on MSVC.
In commit 9d6b160d7, I tweaked pg_config.h.win32 to use
"#define HAVE_LONG_LONG_INT_64 1" rather than defining it as empty,
for consistency with what happens in an autoconf'd build.
But Solution.pm injects another definition of that macro into
ecpg_config.h, leading to justifiable (though harmless) compiler whining.
Make that one consistent too.  Back-patch, like the previous patch.

Discussion: https://postgr.es/m/CAEepm=1dWsXROuSbRg8PbKLh0S=8Ou-V8sr05DxmJOF5chBxqQ@mail.gmail.com
2017-09-03 11:01:08 -04:00
Peter Eisentraut
727add80d6 doc: Fix typos and other minor issues
Author: Alexander Lakhin <exclusion@gmail.com>
2017-09-01 23:23:15 -04:00
Tom Lane
dd344de671 Make [U]INT64CONST safe for use in #if conditions.
Instead of using a cast to force the constant to be the right width,
assume we can plaster on an L, UL, LL, or ULL suffix as appropriate.
The old approach to this is very hoary, dating from before we were
willing to require compilers to have working int64 types.

This fix makes the PG_INT64_MIN, PG_INT64_MAX, and PG_UINT64_MAX
constants safe to use in preprocessor conditions, where a cast
doesn't work.  Other symbolic constants that might be defined using
[U]INT64CONST are likewise safer than before.

Also fix the SIZE_MAX macro to be similarly safe, if we are forced
to provide a definition for that.  The test added in commit 2e70d6b5e
happens to do what we want even with the hack "(size_t) -1" definition,
but we could easily get burnt on other tests in future.

Back-patch to all supported branches, like the previous commits.

Discussion: https://postgr.es/m/15883.1504278595@sss.pgh.pa.us
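
The new approach pastes a suffix instead of casting; for instance, when
int64 is "long long" (a sketch):

    #define INT64CONST(x)  (x##LL)
    #define UINT64CONST(x) (x##ULL)

    #define PG_INT64_MAX   INT64CONST(0x7FFFFFFFFFFFFFFF)

    /* a cast-based definition would choke the preprocessor here */
    #if PG_INT64_MAX > 0
    /* ... */
    #endif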
2017-09-01 15:14:18 -04:00
Tom Lane
074985b26a Ensure SIZE_MAX can be used throughout our code.
Pre-C99 platforms may lack <stdint.h> and thereby SIZE_MAX.  We have
a couple of places using the hack "(size_t) -1" as a fallback, but
it wasn't universally available; which means the code added in commit
2e70d6b5e fails to compile everywhere.  Move that hack to c.h so that
we can rely on having SIZE_MAX everywhere.

Per discussion, it'd be a good idea to make the macro's value safe
for use in #if-tests, but that will take a bit more work.  This is
just a quick expedient to get the buildfarm green again.

Back-patch to all supported branches, like the previous commit.

Discussion: https://postgr.es/m/15883.1504278595@sss.pgh.pa.us
2017-09-01 13:52:54 -04:00
Tom Lane
1e6c362606 Doc: document libpq's restriction to INT_MAX rows in a PGresult.
As long as PQntuples, PQgetvalue, etc, use "int" for row numbers, we're
pretty much stuck with this limitation.  The documentation formerly stated
that the result of PQntuples "might overflow on 32-bit operating systems",
which is just nonsense: that's not where the overflow would happen, and
if you did reach an overflow it would not be on a 32-bit machine, because
you'd have OOM'd long since.

Discussion: https://postgr.es/m/CA+FnnTxyLWyjY1goewmJNxC==HQCCF4fKkoCTa9qR36oRAHDPw@mail.gmail.com
2017-08-29 15:38:40 -04:00
Tom Lane
d391fb6c38 Teach libpq to detect integer overflow in the row count of a PGresult.
Adding more than 1 billion rows to a PGresult would overflow its ntups and
tupArrSize fields, leading to client crashes.  It'd be desirable to use
wider fields on 64-bit machines, but because all of libpq's external APIs
use plain "int" for row counters, that's going to be hard to accomplish
without an ABI break.  Given the lack of complaints so far, and the general
pain that would be involved in using such huge PGresults, let's settle for
just preventing the overflow and reporting a useful error message if it
does happen.  Also, for a couple more lines of code we can increase the
threshold of trouble from INT_MAX/2 to INT_MAX rows.

To do that, refactor pqAddTuple() to allow returning an error message that
replaces the default assumption that it failed because of out-of-memory.

Along the way, fix PQsetvalue() so that it reports all failures via
pqInternalNotice().  It already did so in the case of bad field number,
but neglected to report anything for other error causes.

Because of the potential for crashes, this seems like a back-patchable
bug fix, despite the lack of field reports.

Michael Paquier, per a complaint from Igor Korot.

Discussion: https://postgr.es/m/CA+FnnTxyLWyjY1goewmJNxC==HQCCF4fKkoCTa9qR36oRAHDPw@mail.gmail.com
2017-08-29 15:18:01 -04:00
Tom Lane
669bef911c Improve docs about numeric formatting patterns (to_char/to_number).
The explanation about "0" versus "9" format characters was confusing
and arguably wrong; the discussion of sign handling wasn't very good
either.  Notably, while it's accurate to say that "FM" strips leading
zeroes in date/time values, what it really does with numeric values
is to strip *trailing* zeroes, and then only if you wrote "9" rather
than "0".  Per gripes from Erwin Brandstetter.

Discussion: https://postgr.es/m/CAGHENJ7jgRbTn6nf48xNZ=FHgL2WQ4X8mYsUAU57f-vq8PubEw@mail.gmail.com
Discussion: https://postgr.es/m/CAGHENJ45ymd=GOCu1vwV9u7GmCR80_5tW0fP9C_gJKbruGMHvQ@mail.gmail.com
2017-08-29 09:34:21 -04:00
Tom Lane
93067c53ae Stamp 9.3.19. REL9_3_19 2017-08-28 17:28:16 -04:00
Tom Lane
a711d6a31c Doc: adjust release-note credit for parallel pg_restore fix.
Discussion: https://postgr.es/m/CAFcNs+pJ6_Ud-zg3vY_Y0mzfESdM34Humt8avKrAKq_H+v18Cg@mail.gmail.com
2017-08-28 11:40:48 -04:00
Peter Eisentraut
1e5446eac2 Translation updates
Source-Git-URL: git://git.postgresql.org/git/pgtranslation/messages.git
Source-Git-Hash: ef3340570fcdce8fd9695fa85904448acdb92e06
2017-08-28 10:09:04 -04:00
Tom Lane
bacbfaed1b Release notes for 9.6.5, 9.5.9, 9.4.14, 9.3.19, 9.2.23. 2017-08-27 17:35:04 -04:00
Andres Freund
5b286cae3c Backpatch introduction of TupleDescAttr(tupdesc, i).
2cd70845240 / c6293249d changed the way individual attributes in a
TupleDesc are stored and accessed.  To reduce the effort of making
extensions compatible with postgresql 11, and to ease future
backpatching, backpatch introduction of TupleDescAttr() to all
releases.  Do not backpatch change in storage, as that'd be a breaking
change for existing and working extensions.

Author: Andres Freund
Discussion: https://postgr.es/m/20170820181723.tdswdinzptbcwhrr@alap3.anarazel.de
Backpatch: 9.2-
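
The backpatched macro is a one-liner, letting extension code compile
unchanged across branches (usage sketch):

    /* in pre-11 branches, tupdesc->attrs is an array of pointers */
    #define TupleDescAttr(tupdesc, i) ((tupdesc)->attrs[(i)])

    /* extension code written this way works from 9.2 through master: */
    Form_pg_attribute attr = TupleDescAttr(tupdesc, i);

    elog(DEBUG1, "attribute %d is named %s", i, NameStr(attr->attname));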
2017-08-22 07:47:50 -07:00
Tom Lane
ece4bd9013 Fix possible core dump in parallel restore when using a TOC list.
Commit 3eb9a5e7c unintentionally introduced an ordering dependency
into restore_toc_entries_prefork().  The existing coding of
reduce_dependencies() contains a check to skip moving a TOC entry
to the ready_list if it wasn't initially in the pending_list.
This used to suffice to prevent reduce_dependencies() from trying to
move anything into the ready_list during restore_toc_entries_prefork(),
because the pending_list stayed empty throughout that phase; but it no
longer does.  The problem doesn't manifest unless the TOC has been
reordered by SortTocFromFile, which is how I missed it in testing.

To fix, just add a test for ready_list == NULL, converting the call
with NULL from a poor man's sanity check into an explicit command
not to touch TOC items' list membership.  Clarify some of the comments
around this; in particular, note the primary purpose of the check for
pending_list membership, which is to ensure that we can't try to restore
the same item twice, in case a TOC list forces it to be restored before
its dependency count goes to zero.

Per report from Fabrízio de Royes Mello.  Back-patch to 9.3, like the
previous commit.

Discussion: https://postgr.es/m/CAFcNs+pjuv0JL_x4+=71TPUPjdLHOXA4YfT32myj_OrrZb4ohA@mail.gmail.com
2017-08-19 13:39:38 -04:00
Tom Lane
bc4404405f Further tweaks to compiler flags for PL/Perl on Windows.
It now emerges that we can only rely on Perl to tell us we must use
-D_USE_32BIT_TIME_T if it's Perl 5.13.4 or later.  For older versions,
revert to our previous practice of assuming we need that symbol in
all 32-bit Windows builds.  This is not ideal, but inquiring into
which compiler version Perl was built with seems far too fragile.
In any case, we had not previously had complaints about these old
Perl versions, so let's assume this is Good Enough.  (It's still
better than the situation ante commit 5a5c2feca, in that at least
the effects are confined to PL/Perl rather than the whole PG build.)

Back-patch to all supported versions, like 5a5c2feca and predecessors.

Discussion: https://postgr.es/m/CANFyU97OVQ3+Mzfmt3MhuUm5NwPU=-FtbNH5Eb7nZL9ua8=rcA@mail.gmail.com
2017-08-17 13:15:36 -04:00
Michael Meskes
f8bc6b2f68 Changed ecpg parser to allow RETURNING clauses without attached C variables. 2017-08-16 13:30:09 +02:00
Peter Eisentraut
9f0f4efc2f Include foreign tables in information_schema.table_privileges
This appears to have been an omission in the original commit
0d692a0dc9f.  All related information_schema views already include
foreign tables.

Reported-by: Nicolas Thauvin <nicolas.thauvin@dalibo.com>
2017-08-15 19:32:52 -04:00
Tom Lane
cd184273ba Handle elog(FATAL) during ROLLBACK more robustly.
Stress testing by Andreas Seltenreich disclosed longstanding problems that
occur if a FATAL exit (e.g. due to receipt of SIGTERM) occurs while we are
trying to execute a ROLLBACK of an already-failed transaction.  In such a
case, xact.c is in TBLOCK_ABORT state, so that AbortOutOfAnyTransaction
would skip AbortTransaction and go straight to CleanupTransaction.  This
led to an assert failure in an assert-enabled build (due to the ROLLBACK's
portal still having a cleanup hook) or without assertions, to a FATAL exit
complaining about "cannot drop active portal".  The latter's not
disastrous, perhaps, but it's messy enough to want to improve it.

We don't really want to run all of AbortTransaction in this code path.
The minimum required to clean up the open portal safely is to do
AtAbort_Memory and AtAbort_Portals.  It seems like a good idea to
do AtAbort_Memory unconditionally, to be entirely sure that we are
starting with a safe CurrentMemoryContext.  That means that if the
main loop in AbortOutOfAnyTransaction does nothing, we need an extra
step at the bottom to restore CurrentMemoryContext = TopMemoryContext,
which I chose to do by invoking AtCleanup_Memory.  This'll result in
calling AtCleanup_Memory twice in many of the paths through this function,
but that seems harmless and reasonably inexpensive.

The original motivation for the assertion in AtCleanup_Portals was that
we wanted to be sure that any user-defined code executed as a consequence
of the cleanup hook runs during AbortTransaction not CleanupTransaction.
That still seems like a valid concern, and now that we've seen one case
of the assertion firing --- which means that exactly that would have
happened in a production build --- let's replace the Assert with a runtime
check.  If we see the cleanup hook still set, we'll emit a WARNING and
just drop the hook unexecuted.

This has been like this a long time, so back-patch to all supported
branches.

Discussion: https://postgr.es/m/877ey7bmun.fsf@ansel.ydns.eu
2017-08-14 15:43:20 -04:00
Tom Lane
25169b948e Absorb -D_USE_32BIT_TIME_T switch from Perl, if relevant.
Commit 3c163a7fc's original choice to ignore all #define symbols whose
names begin with underscore turns out to be too simplistic.  On Windows,
some Perl installations are built with -D_USE_32BIT_TIME_T, and we must
absorb that or we get the wrong result for sizeof(PerlInterpreter).

This effectively re-reverts commit ef58b87df, which injected that symbol
in a hacky way, making it apply to all of Postgres not just PL/Perl.
More significantly, it did so on *all* 32-bit Windows builds, even when
the Perl build to be used did not select this option; so that it fails
to work properly with some newer Perl builds.

By making this change, we would be introducing an ABI break in 32-bit
Windows builds; but fortunately we have not used type time_t in any
exported Postgres APIs in a long time.  So it should be OK, both for
PL/Perl itself and for third-party extensions, if an extension library
is built with a different _USE_32BIT_TIME_T setting than the core code.

Patch by me, based on research by Ashutosh Sharma and Robert Haas.
Back-patch to all supported branches, as commit 3c163a7fc was.

Discussion: https://postgr.es/m/CANFyU97OVQ3+Mzfmt3MhuUm5NwPU=-FtbNH5Eb7nZL9ua8=rcA@mail.gmail.com
2017-08-14 11:48:59 -04:00
Tom Lane
bb11ff2bc8 Remove AtEOXact_CatCache().
The sole useful effect of this function, to check that no catcache
entries have positive refcounts at transaction end, has really been
obsolete since we introduced ResourceOwners in PG 8.1.  We reduced the
checks to assertions years ago, so that the function was a complete
no-op in production builds.  There have been previous discussions about
removing it entirely, but consensus up to now was that it had some small
value as a cross-check for bugs in the ResourceOwner logic.

However, it now emerges that it's possible to trigger these assertions
if you hit an assert-enabled backend with SIGTERM during a call to
SearchCatCacheList, because that function temporarily increases the
refcounts of entries it's intending to add to a catcache list construct.
In a normal ERROR scenario, the extra refcounts are cleaned up by
SearchCatCacheList's PG_CATCH block; but in a FATAL exit we do a
transaction abort and exit without ever executing PG_CATCH handlers.

There's a case to be made that this is a generic hazard and we should
consider restructuring elog(FATAL) handling so that pending PG_CATCH
handlers do get run.  That's pretty scary though: it could easily create
more problems than it solves.  Preliminary stress testing by Andreas
Seltenreich suggests that there are not many live problems of this ilk,
so we rejected that idea.

There are more-localized ways to fix the problem; the most principled
one would be to use PG_ENSURE_ERROR_CLEANUP instead of plain PG_TRY.
But adding cycles to SearchCatCacheList isn't very appealing.  We could
also weaken the assertions in AtEOXact_CatCache in some more or less
ad-hoc way, but that just makes its raison d'etre even less compelling.
In the end, the most reasonable solution seems to be to just remove
AtEOXact_CatCache altogether, on the grounds that it's not worth trying
to fix it.  It hasn't found any bugs for us in many years.

Per report from Jeevan Chalke.  Back-patch to all supported branches.

Discussion: https://postgr.es/m/CAM2+6=VEE30YtRQCZX7_sCFsEpoUkFBV1gZazL70fqLn8rcvBA@mail.gmail.com
2017-08-13 16:15:14 -04:00
Tom Lane
06931a9c0e Fix handling of container types in find_composite_type_dependencies.
find_composite_type_dependencies correctly found columns that are of
the specified type, and columns that are of arrays of that type, but
not columns that are domains or ranges over the given type, its array
type, etc.  The most general way to handle this seems to be to assume
that any type that is directly dependent on the specified type can be
treated as a container type, and processed recursively (allowing us
to handle nested cases such as ranges over domains over arrays ...).
Since a type's array type already has such a dependency, we can drop
the existing special case for the array type.

The very similar logic in get_rels_with_domain was likewise a few
bricks shy of a load, as it supposed that a directly dependent type
could *only* be a sub-domain.  This is already wrong for ranges over
domains, and it'll someday be wrong for arrays over domains.

Add test cases illustrating the problems, and back-patch to all
supported branches.

Discussion: https://postgr.es/m/15268.1502309024@sss.pgh.pa.us
2017-08-09 17:03:10 -04:00
Alvaro Herrera
962b26b674 Reword some unclear comments 2017-08-08 18:49:23 -04:00
Tom Lane
a5915db2fa Stamp 9.3.18. REL9_3_18 2017-08-07 17:17:46 -04:00
Peter Eisentraut
61f52850f5 Translation updates
Source-Git-URL: git://git.postgresql.org/git/pgtranslation/messages.git
Source-Git-Hash: 0858deb5ec238f269d1311ff1299f003fc168cab
2017-08-07 13:51:07 -04:00