This fixes problems where a plan must change but fails to do so,
as seen in a bug report from Rajkumar Raghuwanshi.
For ALTER FOREIGN TABLE OPTIONS, do this through the standard method of
forcing a relcache flush on the table. For ALTER FOREIGN DATA WRAPPER
and ALTER SERVER, just flush the whole plan cache on any change in
pg_foreign_data_wrapper or pg_foreign_server. That matches the way
we handle some other low-probability cases such as opclass changes, and
it's unclear that the case arises often enough to be worth working harder.
Besides, that gives a patch that is simple enough to back-patch with
confidence.
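As a sketch of the sequence that now behaves correctly (the server and
table names here are hypothetical; use_remote_estimate is a postgres_fdw
option):

    PREPARE q AS SELECT * FROM remote_tab;  -- foreign table on server "loopback"
    EXECUTE q;                               -- plan built under the old options
    ALTER SERVER loopback OPTIONS (ADD use_remote_estimate 'true');
    EXECUTE q;                               -- cached plan is now invalidated and rebuilt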
Back-patch to 9.3. In principle we could apply the code change to 9.2 as
well, but (a) we lack postgres_fdw to test it with, (b) it's doubtful that
anyone is doing anything exciting enough with FDWs that far back to need
this desperately, and (c) the patch doesn't apply cleanly.
Patch originally by Amit Langote, reviewed by Etsuro Fujita and Ashutosh
Bapat, who each contributed substantial changes as well.
Discussion: https://postgr.es/m/CAKcux6m5cA6rRPTKkqVdJ-R=KKDfe35Q_ZuUqxDSV_4hwga=og@mail.gmail.com
This commit purported to use a variable hash seed for Partial
HashAggregate, but actually did the opposite - it made us use a
variable seed for any HashAggregate that is NOT partial. Woops.
Commit 4aec49899e5782247e134f94ce1c6ee926f88e1c reorganized the order
of operations here so that we no longer increment the number of "extra
waits" before locking the semaphore, but it did not change the
starting value of extraWaits from 0 to -1 to compensate. In the worst
case, this could leak a semaphore count, but that seems to be unlikely
in practice.
Discussion: http://postgr.es/m/CAA4eK1JyVqXiMba+-a589Rk0pyHsyKkGxeumVKjU6Y74hdrVLQ@mail.gmail.com
Amit Kapila, per an off-list report by Dilip Kumar. Reviewed by me.
With the old code, a backend that read pg_stat_activity without ever
having executed a parallel query might see a backend in the midst of
executing one waiting on a DSA LWLock, resulting in a crash. The
solution is for backends to register the tranche at startup time, not
the first time a parallel query is executed.
Report by Andreas Seltenreich. Patch by me, reviewed by Thomas Munro.
array_fill(..., array[0]) produced an empty array, which is probably
what users expect, but it was a one-dimensional zero-length array
which is not our standard representation of empty arrays. Also, for
no very good reason, it rejected empty input arrays; that case should
be allowed and produce an empty output array.
In passing, remove the restriction that the input array(s) have lower
bound 1. That seems rather pointless, and it would have needed extra
complexity to make the check deal with empty input arrays.
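A minimal sketch of the intended behavior after the fix:

    SELECT array_fill(7, ARRAY[3]);     -- {7,7,7}
    SELECT array_fill(7, ARRAY[0]);     -- now yields the standard empty array '{}'
    SELECT array_fill(7, '{}'::int[]);  -- empty dimensions array: accepted, also '{}'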
Per bug #14487 from Andrew Gierth. It's been broken all along, so
back-patch to all supported branches.
Discussion: https://postgr.es/m/20170105152156.10135.64195@wrigleys.postgresql.org
Inheritance operations must treat the OID column, if any, much like
regular user columns. But MergeAttributesIntoExisting() neglected to
do that, leading to weird results after a table with OIDs is associated
to a parent with OIDs via ALTER TABLE ... INHERIT.
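A minimal sketch of the affected case (table names hypothetical):

    CREATE TABLE parent (a int) WITH OIDS;
    CREATE TABLE child  (a int) WITH OIDS;
    ALTER TABLE child INHERIT parent;  -- OID column must be merged like a user column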
Report and patch by Amit Langote, reviewed by Ashutosh Bapat, some
adjustments by me. It's been broken all along, so back-patch to
all supported branches.
Discussion: https://postgr.es/m/cb13cfe7-a48c-5720-c383-bb843ab28298@lab.ntt.co.jp
Be more clear that we represent timestamps in microseconds when
integer timestamps are used, and in fractional seconds when
floating-point timestamps are used.
Discussion: http://postgr.es/m/20161212135045.GB15488@e733.localdomain
Report by Alexander Alekseev. Wording by me with a suggestion
from Tom Lane.
RelationGetPartitionQual() and generate_partition_qual() are always
called with recurse = true, so we don't need an argument for that.
Extracted by me from a larger patch by Amit Langote.
After a tuple is routed to a partition, it has been converted from the
root table's row type to the partition's row type. ExecConstraints
needs to report the failure using the original tuple and the parent's
tuple descriptor rather than the ones for the selected partition.
Amit Langote
Allow pg_recvlogical to specify an ending LSN, complementing
the existing --startpos=LSN option.
Craig Ringer, reviewed by Euler Taveira and Naoki Okano
configure can only probe the existence of gcc intrinsics, not how well
they're implemented, and unfortunately the answer is sometimes "badly".
In particular we've found that multiple compilers fail to implement
char-width __sync_lock_test_and_set() correctly on PPC; and even a correct
implementation would necessarily be pretty inefficient, since that hardware
has only a word-wide primitive to work with.
Given the knowledge we've accumulated in s_lock.h, it appears that it's
best to rely on int-width TAS operations on most non-Intel architectures.
Hence, pick int not char when both are nominally available to us in
generic-gcc.h (note that that code is not used for x86[_64]).
Back-patch to fix regression test failures on FreeBSD/PPC. Ordinarily
back-patching a change like this would be verboten because of ABI breakage.
But since pg_atomic_flag is not yet used in any Postgres data structure,
there's no ABI to break. It seems safer to back-patch to avoid possible
gotchas, if someday we do back-patch something that uses pg_atomic_flag.
Discussion: https://postgr.es/m/25414.1483076673@sss.pgh.pa.us
Commit 2ac3ef7a01df859c62d0a02333b646d65eaec5ff added a TupleTableSlot
for partition tuple slot to EState (es_partition_tuple_slot) but it's
more logical to have it as part of ModifyTableState
(mt_partition_tuple_slot) and CopyState (partition_tuple_slot).
Discussion: http://postgr.es/m/1bd459d9-4c0c-197a-346e-e5e59e217d97@lab.ntt.co.jp
Amit Langote, per a gripe from me
Leave OpenSSL's default passphrase collection callback in place during
the first call of secure_initialize() in server startup. Although that
doesn't work terribly well in daemon contexts, some people feel we should
not break it for anyone who was successfully using it before. We still
block passphrase demands during SIGHUP, meaning that you can't adjust SSL
configuration on-the-fly if you used a passphrase, but this is no worse
than what it was before commit de41869b6. And we block passphrase demands
during EXEC_BACKEND reloads; that behavior wasn't useful either, but at
least now it's documented.
Tweak some related log messages for more readability, and avoid issuing
essentially duplicate messages about reload failure caused by a passphrase.
Discussion: https://postgr.es/m/29982.1483412575@sss.pgh.pa.us
The typical size of an LWLock is now 16 bytes even on 64-bit platforms,
and the size of slock_t is now irrelevant. But pg_atomic_uint32 can
(perhaps surprisingly) still be larger than 4 bytes, so there's still
some marginal point to allowing LWLOCK_MINIMAL_SIZE == 64.
Commit 008608b9d51061b1f598c197477b3dc7be9c4a64 made the changes
that led to the need for these updates.
Add methods to the core test framework PostgresNode.pm to allow us to
test that standby nodes have caught up with the master, as well as
basic LSN handling. Used in tests recovery/t/001_stream_rep.pl and
recovery/t/004_timeline_switch.pl
Craig Ringer, reviewed by Aleksander Alekseev and Simon Riggs
The purpose of the test was to check access to the sequence relation on
a hot standby, so change the test to read a different column from the
sequence, instead of just reading the catalog.
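For example, a query along these lines reads the sequence relation itself
rather than the catalogs (sequence name hypothetical):

    SELECT is_called FROM hs_seq;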
From: Andreas Karlsson <andreas@proxel.se>
These files are deleted but not yet gone from the filesystem. Operations
on them will return ERROR_DELETE_PENDING.
With this we start treating that as ENOENT, meaning the file does not
exist (which is the state it will soon reach). This should be safe in
every case except when we try to recreate a file with exactly the same
name. This is an operation that PostgreSQL does very seldom, so
hopefully that won't happen much -- and even if it does, this treatment
should be no worse than treating it as an unhandled error.
We've been unable to reproduce the bug reliably, so pushing this to
master to get buildfarm coverage and other testing. Once it's proven to
be stable, it should be considered for backpatching.
Discussion: https://postgr.es/m/20160712083220.1426.58667%40wrigleys.postgresql.org
Patch by me and Michael Paquier
Since streaming is now supported for all output formats, make this the
default as this is what most people want.
To get the old behavior, the parameter -X none can be specified to turn
it off.
This also removes the parameter -x for fetch, now requiring -X fetch to
be specified to use that.
Reviewed by Vladimir Rusinov, Michael Paquier and Simon Riggs
OpenSSL's default behavior when loading a passphrase-protected key file
is to open /dev/tty and demand the password from there. It was kinda
sorta okay to allow that to happen at server start, but really that was
never workable in standard daemon environments. And it was a complete
fail on Windows, where the same thing would happen at every backend launch.
Yesterday's commit de41869b6 put the final nail in the coffin by causing
that to happen at every SIGHUP; even if you've still got a terminal acting
as the server's TTY, having the postmaster freeze until you enter the
passphrase again isn't acceptable.
Hence, override the default behavior with a callback that returns an empty
string, ensuring failure. Change the documentation to say that you can't
have a passphrase-protected server key, period.
If we can think of a production-grade way of collecting a passphrase from
somewhere, we might do that once at server startup and use this callback
to feed it to OpenSSL, but it's far from clear that anyone cares enough
to invest that much work in the feature. The lack of complaints about
the existing fractionally-baked behavior suggests nobody's using it anyway.
Discussion: https://postgr.es/m/29982.1483412575@sss.pgh.pa.us
That was written when we still had "crypt" authentication, and it was
referring to the fact that an older client might support "crypt"
authentication but not "md5". But we haven't supported "crypt" for years.
(As soon as we add a new authentication mechanism that doesn't work with
MD5 hashes, we'll need a similar notice again. But this text as it's worded
now is just wrong.)
Backpatch to all supported versions.
Discussion: https://www.postgresql.org/message-id/9a7263eb-0980-2072-4424-440bb2513dc7@iki.fi
It is no longer necessary to restart the server to enable, disable,
or reconfigure SSL. Instead, we just create a new SSL_CTX struct
(by re-reading all relevant files) whenever we get SIGHUP. Testing
shows that this is fast enough that it shouldn't be a problem.
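For instance, after editing ssl_cert_file or other SSL settings in
postgresql.conf, a plain reload now suffices to apply them:

    SELECT pg_reload_conf();   -- or pg_ctl reload, or sending SIGHUP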
In conjunction with that, downgrade the logic that complains about
pg_hba.conf "hostssl" lines when SSL isn't active: now that's just
a warning condition not an error.
An issue that still needs to be addressed is what shall we do with
passphrase-protected server keys? As this stands, the server would
demand the passphrase again on every SIGHUP, which is certainly
impractical. But the case was only barely supported before, so that
does not seem a sufficient reason to hold up committing this patch.
Andreas Karlsson, reviewed by Michael Banck and Michael Paquier
Discussion: https://postgr.es/m/556A6E8A.9030400@proxel.se
The advantage of clock_gettime() is that the API allows the result to
be precise to nanoseconds, not just microseconds as in gettimeofday().
Now that it's routinely possible to do tens of plan node executions
in 1us, we really need more precision than gettimeofday() can offer
for EXPLAIN ANALYZE to accumulate statistics with.
Some research shows that clock_gettime() is available on pretty nearly
every modern Unix-ish platform, and as far as I have been able to test,
it has about the same execution time as gettimeofday(), so there's no
loss in switching over. (By the same token, this doesn't do anything
to fix the fact that we really wish clock readings were faster. But
there's enough win here to justify changing anyway.)
A small side benefit is that on most platforms, we can use CLOCK_MONOTONIC
instead of CLOCK_REALTIME and thereby render EXPLAIN impervious to
concurrent resets of the system clock. (This means that code must not
assume that the contents of struct instr_time have any well-defined
interpretation as timestamps, but really that was true before.)
Some platforms offer nonstandard clock IDs that might be of interest.
This patch knows we should use CLOCK_MONOTONIC_RAW on macOS, because it
provides more precision and is faster to read than their CLOCK_MONOTONIC.
If there turn out to be many more cases where we need special rules, it
might be appropriate to handle the selection of clock ID in configure,
but for the moment that doesn't seem worth the trouble.
Discussion: https://postgr.es/m/31856.1400021891@sss.pgh.pa.us
For aggregated logging, pgbench supposed that printing the integer part of
INSTR_TIME_GET_DOUBLE() would produce a Unix timestamp. That was already
broken on Windows, and it's about to get broken on most other platforms as
well. As in commit 74baa1e3b, we can remove the entanglement at the price
of one extra syscall per transaction; though here it seems more convenient
to use time(NULL) instead of gettimeofday(), since we only need
integral-second precision.
I took the time to do some wordsmithing on the documentation about
pgbench's logging features, too.
Discussion: https://postgr.es/m/8837.1483216839@sss.pgh.pa.us
This code was presuming undue familiarity with the contents of the
instr_time struct. That was already broken on Windows, and it's about
to get broken on most other platforms as well. The simplest solution
that preserves the current output definition is to just do our own
gettimeofday() call here. Realistically, the extra cost is probably
negligible in comparison to everything else that's going on in a
pgbench transaction, so it's not worth sweating over.
On Windows, the precision delivered by gettimeofday() is lower than
one could wish, but this is still a big improvement over printing
zeroes, as the code did before.
Discussion: https://postgr.es/m/8837.1483216839@sss.pgh.pa.us
Commit 2ac3ef7a0 added a query with an underdetermined output row order;
it has failed multiple times in the buildfarm since then. Add an ORDER BY
to fix. Also, don't rely on a DROP CASCADE to drop in a well-determined
order; that hasn't failed yet but I don't trust it much, and we're not
saving any typing by using CASCADE anyway.
Commit f0e44751d added new node tags at a place in the tag numbering
where there was no daylight left before the next hard-coded number,
resulting in some duplicate tag assignments. This doesn't seem to have
caused any big problem so far, but it's surely trouble waiting to happen.
We could adjust the manually assigned breakpoints to make more room,
but that just leaves the same hazard waiting to strike again in future.
What seems like a better idea is to get rid of the manual assignments
and leave NodeTags to be automatically assigned, consecutively from one
on up. This means that any change in the tag list forces a backend-wide
recompile, but realistically that's usually needed anyway.
Discussion: https://postgr.es/m/29670.1482942811@sss.pgh.pa.us
Most code was casting this through a generic Node. By declaring
everything as RoleSpec appropriately, we can remove a bunch of casts and
ad-hoc node type checking.
Reviewed-by: Alvaro Herrera <alvherre@2ndquadrant.com>
interval_transform() contained two separate bugs that caused it to
sometimes mistakenly decide that a cast from interval to restricted
interval is a no-op and throw it away.
First, it was wrong to rely on dt.h's field type macros to have an
ordering consistent with the field's significance; in one case they do
not. This led to mistakenly treating YEAR as less significant than MONTH,
so that a cast from INTERVAL MONTH to INTERVAL YEAR was incorrectly
discarded.
Second, fls(1<<k) produces k+1 not k, so comparing its output directly
to SECOND was wrong. This led to supposing that a cast to INTERVAL
MINUTE was really a cast to INTERVAL SECOND and so could be discarded.
To fix, get rid of the use of fls(), and make a function based on
intervaltypmodout to produce a field ID code adapted to the need here.
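A sketch of the kind of expression that was mishandled; before the fix
the outer cast could be thrown away as a supposed no-op:

    SELECT ('11 months'::interval month)::interval year;
    -- should yield '00:00:00' (months below a full year are dropped);
    -- the broken transform returned '11 mons' unchanged
    SELECT ('1 day 02:03:04'::interval)::interval day to minute;
    -- should yield '1 day 02:03:00';
    -- the broken transform kept the seconds, '1 day 02:03:04'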
Per bug #14479 from Piotr Stefaniak. Back-patch to 9.2 where transform
functions were introduced, because this code was born broken.
Discussion: https://postgr.es/m/20161227172307.10135.7747@wrigleys.postgresql.org
In addition to space accounted for by tuple_len, dead_tuple_len and
free_space, the table_len includes page overhead, the item pointers
table and padding bytes.
Backpatch to live branches.
In 56c7d8d the behavior of keeping .partial segments around
(considered corrupt) in case a connection failure occurs was
accidentally removed. This would lead to an incomplete segment
being considered complete.
Author: Michael Paquier
hashname() asserted that the key string it is given is shorter than
NAMEDATALEN. That should surely always be true if the input is in fact a
regular value of type "name". However, for reasons of coding convenience,
we allow plain old C strings to be treated as "name" values in many places.
Some SQL functions accept arbitrary "text" inputs, convert them to C
strings, and pass them otherwise-untransformed to syscache lookups for name
columns, allowing an overlength input value to trigger hashname's Assert.
This would be a DOS problem, except that it only happens in assert-enabled
builds which aren't recommended for production. In a production build,
you'll just get a name lookup error, since regardless of the hash value
computed by hashname, the later equality comparison checks can't match.
Likewise, if the catalog lookup is done by seqscan or indexscan searches,
there will just be a lookup error, since the name comparison functions
don't contain any similar length checks, and will see an overlength input
as unequal to any stored entry.
After discussion we concluded that we should simply remove this Assert.
It's inessential to hashname's own functionality, and having such an
assertion in only some paths for name lookup is more of a foot-gun than
a useful check. There may or may not be a case for the affected callers
to do something other than let the name lookup fail, but we'll consider
that separately; in any case we probably don't want to change such
behavior in the back branches.
Per report from Tushar Ahuja. Back-patch to all supported branches.
Report: https://postgr.es/m/7d0809ee-6f25-c9d6-8e74-5b2967830d49@enterprisedb.com
Discussion: https://postgr.es/m/17691.1482523168@sss.pgh.pa.us
\crosstabview's complaint about multiple entries for the same crosstab
cell quoted the wrong row and/or column values. It would accidentally
appear to work if the data had been in strcmp() order to start with,
which probably explains how we missed noticing this during development.
This could be fixed in more than one way, but the way I chose was to
hang onto both result pointers from bsearch() and use those to get at
the value names.
In passing, avoid casting away const in the bsearch comparison functions.
No bug there, just poor style.
Per bug #14476 from Tomonari Katsumata. Back-patch to 9.6 where
\crosstabview was introduced.
Report: https://postgr.es/m/20161225021519.10139.45460@wrigleys.postgresql.org
The -v/--verbose option was not included in the output from --help for
pg_dumpall even though it's in the pg_dumpall documentation and has
apparently been around since pg_dumpall was reimplemented in C in 2002.
Fix that by adding it.
Pointed out by Daniel Westermann.
Back-patch to all supported branches.
Discussion: https://www.postgresql.org/message-id/2020970042.4589542.1482482101585.JavaMail.zimbra%40dbi-services.com
When providing tab completion for ALTER DEFAULT PRIVILEGES, we are
including the list of roles as possible options for completion after the
GRANT or REVOKE. Further, we accept FOR ROLE/IN SCHEMA at the same time
and in either order, but the tab completion was only working for one or
the other. Lastly, we weren't using the actual list of allowed kinds of
objects for default privileges for completion after the 'GRANT X ON', but
instead were completing to what 'GRANT X ON' supports, which isn't the
same at all.
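For reference, the full shape of the command that completion has to cope
with looks like this (role and schema names are hypothetical):

    ALTER DEFAULT PRIVILEGES FOR ROLE alice IN SCHEMA public
        GRANT SELECT ON TABLES TO bob;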
Address these issues by improving the forward tab-completion for ALTER
DEFAULT PRIVILEGES and then constrain and correct how the tail
completion is done when it is for ALTER DEFAULT PRIVILEGES.
Back-patch the forward/tail tab-completion to 9.6, where we made it easy
to handle such cases.
For 9.5 and earlier, correct the initial tab-completion to at least be
correct as far as it goes and then add a check for GRANT/REVOKE to only
tab-complete when the GRANT/REVOKE is the start of the command, so we
don't try to do tab-completion after we get to the GRANT/REVOKE part of
the ALTER DEFAULT PRIVILEGES command, which is better than providing
incorrect completions.
Initial patch for master and 9.6 by Gilles Darold, though I cleaned it
up and added a few comments. All bugs in the 9.5 and earlier patch are
mine.
Discussion: https://www.postgresql.org/message-id/1614593c-e356-5b27-6dba-66320a9bc68b@dalibo.com
Now that it has only INH_NO and INH_YES values, it's just weird that
it's not a plain bool, so make it that way.
Also rename RangeVar.inhOpt to "inh", to be like RangeTblEntry.inh.
My recollection is that we gave it a different name specifically because
it had a different representation than the derived bool value, but it
no longer does. And this is a good forcing function to be sure we
catch any places that are affected by the change.
Bump catversion because of possible effect on stored RangeVar nodes.
I'm not exactly convinced that we ever store RangeVar on disk, but
we have a readfuncs function for it, so be cautious. (If we do so,
then commit e13486eba was in error not to bump catversion.)
Follow-on to commit e13486eba.
Discussion: http://postgr.es/m/CA+TgmoYe+EG7LdYX6pkcNxr4ygkP4+A=jm9o-CPXyOvRiCNwaQ@mail.gmail.com
We had an index entry for "median" attached to the percentile_cont function
entry, which was pretty useless because a person following the link would
never realize that that function was the one they were being hinted to use.
Instead, make the index entry point at the example in syntax-aggregates,
and add a <seealso> link to "percentile".
Also, since that example explicitly claims to be calculating the median,
make it use percentile_cont not percentile_disc. This makes no difference
in terms of the larger goals of that section, but so far as I can find,
nearly everyone thinks that "median" means the continuous not discrete
calculation.
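For instance, the usual way to compute a median (table and column names
hypothetical) is:

    SELECT percentile_cont(0.5) WITHIN GROUP (ORDER BY income)
    FROM households;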
Per gripe from Steven Winfield. Back-patch to 9.4 where we introduced
percentile_cont.
Discussion: https://postgr.es/m/20161223102056.25614.1166@wrigleys.postgresql.org
I got a little annoyed by reading documentation paragraphs containing
both spellings within a few lines of each other. My dictionary says
"descendant" is the preferred spelling, and it's certainly the majority
usage in our tree, so standardize on that.
For one usage in parallel.sgml, I thought it better to rewrite to avoid
the term altogether.