In v11 or before, recovery target settings could not take effect in
crash recovery because they are specified in recovery.conf and
crash recovery always starts without recovery.conf. But commit
2dedf4d9a8 integrated recovery.conf into postgresql.conf, which
unexpectedly allowed recovery target settings to take effect
even in crash recovery. This is definitely not good behavior.
To fix the issue, this commit makes crash recovery always ignore
recovery target settings.
Back-patch to v12.
Author: Peter Eisentraut
Reviewed-by: Fujii Masao
Discussion: https://postgr.es/m/e445616d-023e-a268-8aa1-67b8b335340c@pgmasters.net
In the course of 5567d12ce0, 356687bd8 and 317ffdfeaa, I changed
BuildTupleHashTable[Ext]'s call to ExecBuildGroupingEqual to pass NULL
instead of the parent node, which in turn prevents the tuple
equality comparator from being JIT compiled. While that fixes
bug #15486, it is not actually necessary after all of the above commits,
as we don't re-build the comparator when using the new
BuildTupleHashTableExt() interface (the contents of the hashtable
are reset, but the TupleHashTable itself is not).
Therefore re-allow JIT compilation for callers that use
BuildTupleHashTableExt with a separate context for "metadata" and
content.
As in the previous commit, there's ongoing work to make this easier to
test to prevent such regressions in the future, but that
infrastructure is not going to be backpatchable.
The performance impact of not JIT compiling hashtable equality
comparators can be substantial, e.g. for aggregation queries that
aggregate a lot of input rows to few output rows (with a lot of output
groups, most lookups start a new group rather than matching an existing
one, so fewer equality comparisons are needed).
Author: Andres Freund
Discussion: https://postgr.es/m/20190927072053.njf6prdl3vb7y7qb@alap3.anarazel.de
Backpatch: 11, just as 5567d12ce0
For many queries, the fact that the tuple descriptor from the lower
node was not taken into account when determining whether the type of a
slot is fixed led to tuple deforming for such upper nodes not being
JIT accelerated.
I broke this in 675af5c01e.
There is ongoing work to enable writing regression tests for related
behavior (including a patch that would have detected this
regression), by optionally showing such details in EXPLAIN. But as it
seems unlikely that that will be suitable for stable branches, just
merge the fix for now.
While it's fairly close to the 12 release window, the facts that 11
continues to perform JITed tuple deforming in these cases, that there
are still cases where we do so in 12, and that the performance
regression can be sizable weigh in favor of fixing it now.
Author: Andres Freund
Discussion: https://postgr.es/m/20190927072053.njf6prdl3vb7y7qb@alap3.anarazel.de
Backpatch: 12-, where 675af5c01e was merged.
Windows does not enforce key file permissions checks in libpq, and psql
can produce CRLF line endings on Windows.
Back-patch to release 12 (CRLF) and release 11 (permissions check).
Coverity pointed out that it's pretty silly to check for a null pointer
after we've already dereferenced the pointer. To fix, just swap the
order of the two error checks. Oversight in commit d6e612f83.
This test already knew that, to get stable test output, it had to hide
"loops" counts in EXPLAIN ANALYZE results. But that's not nearly enough:
if we get a smaller number of workers than we planned for, then the
"Workers Launched" number will change, and so will all the rows and loops
counts up to the Gather node. This has resulted in repeated failures in
the buildfarm, so adjust the test to filter out all these counts.
(Really, we wouldn't bother with EXPLAIN ANALYZE at all here, except
that currently the only way to verify that executor-time pruning has
happened is to look for '(never executed)' annotations. Those are
stable and needn't be filtered out.)
Back-patch to v11 where the test was introduced.
Discussion: https://postgr.es/m/11952.1569536725@sss.pgh.pa.us
HEAD supports OpenSSL 0.9.8 and newer versions, and this code likely
got forgotten, as its surrounding comments mention an incorrect version
number.
Author: Michael Paquier
Reviewed-by: Peter Eisentraut
Discussion: https://postgr.es/m/20190927032311.GB8485@paquier.xyz
For the most part, we leave libpq-controlling environment variables
alone during "make installcheck", reasoning that connecting to the
server the user expects us to connect to may depend on those variables.
But that argument doesn't apply to PGDATABASE, since we always want
to connect to a specific database name within the server. And failing
to unset it causes certain ECPG tests to fail, as various people have
complained of in the past. So let's unset it.
Possibly this should be back-patched, but I'm disinclined to do that
right before 12.0 release. Maybe later.
Discussion: https://postgr.es/m/20180318205548.2akxjqvo7hrk5wbc@alap3.anarazel.de
Discussion: https://postgr.es/m/E1bOum4-0002EA-2y@gemulon.postgresql.org
If we don't do this, the rewind fails if the server wasn't cleanly shut
down, which seems unhelpful and serves no purpose.
Also provide a new option --no-ensure-shutdown to suppress this
behavior, for advanced usage that prefers to avoid crash recovery.
Authors: Paul Guo, Jimmy Yih, Ashwin Agrawal
Reviewed-by: Álvaro Herrera
Discussion: https://postgr.es/m/CAEET0ZEffUkXc48pg2iqARQgGRYDiiVxDu+yYek_bTwJF+q=Uw@mail.gmail.com
For some reason at least gcc-9 warns about the fallthrough, even
though it otherwise recognizes that elog(ERROR, ...) doesn't return.
Author: Andres Freund
We've noted certain EXPLAIN queries on these tables occasionally showing
unexpected plan choices. This seems to happen because VACUUM sometimes
fails to update relpages/reltuples for one of these single-page tables,
due to bgwriter or checkpointer holding a pin on the lone page at just
the wrong time. To ensure those values get set, insert explicit ANALYZE
operations on these tables after we finish populating them. This
doesn't seem to affect any other test cases, so it's a usable fix.
Back-patch to v12. In principle the issue exists further back, but
we have not seen it before v12, so I won't risk back-patching further.
Discussion: https://postgr.es/m/24480.1569518042@sss.pgh.pa.us
This removes the last of the temporary debugging queries added to the
regression tests by commit f03a9ca43. We've pretty much convinced
ourselves that the plan instability we were seeing is due to VACUUM
sometimes failing to update relpages/reltuples for a single-page table,
due to bgwriter or checkpointer holding a pin on that page at just the
wrong time. I'll push a workaround for that separately.
Discussion: https://postgr.es/m/CA+hUKG+0CxrKRWRMf5ymN3gm+BECHna2B-q1w8onKBep4HasUw@mail.gmail.com
The test name and the following test cases suggest that the index
created should be a hash index, but the test forgot to add 'using hash'
in the CREATE INDEX statement. This in itself won't improve code
coverage, as other tests already covered the corresponding code.
However, it is better if the added tests serve their actual purpose.
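As an illustration (the table and column names here are hypothetical,
not the ones used in the test), the fix amounts to writing
  CREATE INDEX hash_idx ON test_tbl USING hash (a);
instead of a bare CREATE INDEX, which silently defaults to a btree index.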
Reported-by: Paul A Jungwirth
Author: Paul A Jungwirth
Reviewed-by: Mahendra Singh
Backpatch-through: 9.4
Discussion: https://postgr.es/m/CA+renyV=Us-5XfMC25bNp-uWSj39XgHHmGE9Rh2cQKMegSj52g@mail.gmail.com
The code was enforcing AccessExclusiveLock for all custom relation
options, which is incorrect as the APIs allow a custom lock level to be
set.
While on it, fix a couple of inconsistencies in the tests and the README
of dummy_index_am.
Oversights in commit 773df88.
Discussion: https://postgr.es/m/20190925234152.GA2115@paquier.xyz
LIKE INCLUDING DEFAULTS tried to copy the attrdef expression without
copying the state of the attgenerated column. This is in fact wrong,
because GENERATED and DEFAULT expressions are not the same kind of animal;
one can contain Vars and the other not. We *must* copy attgenerated
when we're copying the attrdef expression. Rearrange the if-tests
so that the expression is copied only when the correct one of
INCLUDING DEFAULTS and INCLUDING GENERATED has been specified.
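As a hedged illustration (hypothetical table names), the distinction is
between these two forms:
  CREATE TABLE src (a int, b int GENERATED ALWAYS AS (a * 2) STORED);
  CREATE TABLE copy1 (LIKE src INCLUDING DEFAULTS);   -- must not copy b's generation expression
  CREATE TABLE copy2 (LIKE src INCLUDING GENERATED);  -- copies it, along with attgenerated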
Per private report from Manuel Rigger.
Tom Lane and Peter Eisentraut
This commit implements the jsonpath .datetime() method as specified in
the SQL/JSON standard. There are no-argument and single-argument
versions of this method. The no-argument version selects the first ISO
datetime format matching the input string. The single-argument version
accepts a template string as its argument.
In addition to the .datetime() method itself, this commit also
implements comparison of the resulting date and time values. There is
some difficulty because the existing jsonb_path_*() functions are
immutable, while comparison of timezone-aware and timezone-naive types
depends on the current timezone. First, the current timezone can be
changed within a session. Moreover, timezones themselves are not
immutable and can be updated. This is why we let the existing immutable
functions throw errors on such non-immutable comparisons. At the same
time, this commit provides jsonb_path_*_tz() functions, which are
stable and do support operations involving timezones. As new functions
are added to the system catalog, catversion is bumped.
Support for the .datetime() method was the only blocker preventing T832
from being marked as supported. sql_features.txt is updated
accordingly.
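For example (illustrative):
  SELECT jsonb_path_query('"2019-09-28"', '$.datetime()');
  SELECT jsonb_path_query('"28-09-2019"', '$.datetime("DD-MM-YYYY")');
while a comparison that depends on the session timezone must go through
the stable variants, e.g. jsonb_path_query_tz(), instead of the
immutable ones.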
Extracted from original patch by Nikita Glukhov, Teodor Sigaev, Oleg Bartunov.
Heavily revised by me. Comments were adjusted by Liudmila Mantrova.
Discussion: https://postgr.es/m/fcc6fc6a-b497-f39a-923d-aa34d0c588e8%402ndQuadrant.com
Discussion: https://postgr.es/m/CAPpHfdsZgYEra_PeCLGNoXOWYx6iU-S3wF8aX0ObQUcZU%2B4XTw%40mail.gmail.com
Author: Alexander Korotkov, Nikita Glukhov, Teodor Sigaev, Oleg Bartunov, Liudmila Mantrova
Reviewed-by: Anastasia Lubennikova, Peter Eisentraut
The SQL/JSON standard allows manipulation of datetime values, so it
appears convenient to allow datetime values to be represented in the
JsonbValue struct. These datetime values are allowed for temporary
representation only; during serialization, datetime values are
converted into strings.
SQL/JSON requires writing timestamps with timezone using the same
timezone offset as they were parsed with. This is why we allow storing
the timezone offset in the JsonbValue struct, and for the same reason a
timezone offset argument is added to the JsonEncodeDateTime() function.
Extracted from original patch by Nikita Glukhov, Teodor Sigaev, Oleg Bartunov.
Revised by me. Comments were adjusted by Liudmila Mantrova.
Discussion: https://postgr.es/m/fcc6fc6a-b497-f39a-923d-aa34d0c588e8%402ndQuadrant.com
Discussion: https://postgr.es/m/CAPpHfdsZgYEra_PeCLGNoXOWYx6iU-S3wF8aX0ObQUcZU%2B4XTw%40mail.gmail.com
Author: Nikita Glukhov, Teodor Sigaev, Oleg Bartunov, Alexander Korotkov, Liudmila Mantrova
Reviewed-by: Anastasia Lubennikova, Peter Eisentraut
Add support for error suppression in some date and time manipulation
functions, as required for jsonpath .datetime() method support. This
commit doesn't use PG_TRY()/PG_CATCH() to implement that. Instead, it
provides internal versions of the date and time functions used, which
support error suppression.
Discussion: https://postgr.es/m/CAPpHfdsZgYEra_PeCLGNoXOWYx6iU-S3wF8aX0ObQUcZU%2B4XTw%40mail.gmail.com
Author: Alexander Korotkov, Nikita Glukhov
Reviewed-by: Anastasia Lubennikova, Peter Eisentraut
This commit adds the parse_datetime() function, which implements
datetime parsing with extended features demanded by the upcoming
jsonpath .datetime() method:
* Dynamic type identification based on the template string (illustrated below),
* Support for the standard-conforming 'strict' mode,
* The timezone offset is returned as a separate value.
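For instance (illustrative of how this surfaces once the jsonpath
method is in place; the SQL-level calls are shown only for context):
  SELECT jsonb_path_query('"12:34"', '$.datetime("HH24:MI").type()');
  -- "time without time zone"
  SELECT jsonb_path_query('"2019-09-28 12:34"',
                          '$.datetime("YYYY-MM-DD HH24:MI").type()');
  -- "timestamp without time zone"
The resulting type is chosen from the fields present in the template.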
Extracted from original patch by Nikita Glukhov, Teodor Sigaev, Oleg Bartunov.
Revised by me.
Discussion: https://postgr.es/m/fcc6fc6a-b497-f39a-923d-aa34d0c588e8%402ndQuadrant.com
Discussion: https://postgr.es/m/CAPpHfdsZgYEra_PeCLGNoXOWYx6iU-S3wF8aX0ObQUcZU%2B4XTw%40mail.gmail.com
Author: Nikita Glukhov, Teodor Sigaev, Oleg Bartunov, Alexander Korotkov
Reviewed-by: Anastasia Lubennikova, Peter Eisentraut
SQL Standard 2016 defines rules for handling separators in datetime
template strings, which differ from the to_date()/to_timestamp() rules.
The standard allows only a small set of separators and requires them to
match strictly. The standard rules apply to the jsonpath .datetime()
method and the CAST (... FORMAT ...) SQL clause. We're not going to
change the handling of separators in the existing
to_date()/to_timestamp() functions, because their current behavior is
familiar to users. The standard behavior is now available via a special
flag, which will be used by the upcoming .datetime() jsonpath method.
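For example (illustrative): to_timestamp('2019/09/28', 'YYYY-MM-DD')
accepts the mismatched separators, while the same input fed through the
strict standard path, e.g.
  SELECT jsonb_path_query('"2019/09/28"', '$.datetime("YYYY-MM-DD")');
is rejected with an error.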
Discussion: https://postgr.es/m/CAPpHfdsZgYEra_PeCLGNoXOWYx6iU-S3wF8aX0ObQUcZU%2B4XTw%40mail.gmail.com
Author: Alexander Korotkov
All our current in-core relation options of type string (not many,
admittedly) behave in reality like enums. But after seeing an
implementation for enum reloptions, it's clear that strings are messier,
so introduce the new reloption type. Switch all string options to be
enums instead.
Fortunately we have a recently introduced test module for reloptions, so
we don't lose coverage of string reloptions, which may still be used by
third-party modules.
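For instance, gist's buffering option, formerly a string reloption,
accepts only a fixed set of values and so is a natural enum (index and
table names here are hypothetical):
  CREATE INDEX gist_idx ON pts USING gist (p) WITH (buffering = auto);
Any value outside on/off/auto is rejected at validation time.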
Authors: Nikolay Shaplov, Álvaro Herrera
Reviewed-by: Nikita Glukhov, Aleksandr Parfenov
Discussion: https://postgr.es/m/43332102.S2V5pIjXRx@x200m
Several buildfarm members (crake, loach and spurfowl) are complaining
about two queries looking at pg_class.reloptions which trigger the
validation routines for string reloptions with default values. This
commit limits the tests so that the routines are triggered only when
building an index with all custom options set in CREATE INDEX, which is
sufficient for the coverage.
Introduced by 640c198.
This includes more tests dedicated to relation options, bringing the
coverage of this code close to 100%, and the module can be used for
other purposes, like a base template for an index AM implementation.
Author: Nikolay Sharplov, Michael Paquier
Reviewed-by: Álvaro Herrera, Dent John
Discussion: https://postgr.es/m/17071942.m9zZutALE6@x200m
Relation options can define a lock mode other than AccessExclusiveLock
since 47167b7, but modules defining custom relation options did not
really have a way to enforce that. Correct that by extending the
current API set so that modules can define a custom lock mode.
Author: Michael Paquier
Reviewed-by: Kuntal Ghosh
Discussion: https://postgr.es/m/20190920013831.GD1844@paquier.xyz
In-core relation options can use a custom lock mode since 47167b7,
which lowered the lock level used for some autovacuum parameters.
However, that commit forgot to consider custom relation options. This
causes failures with ALTER TABLE SET when changing a custom relation
option, as its lock mode is not defined. The existing APIs to define a
custom reloption do not allow defining a custom lock mode, so enforce
its initialization to AccessExclusiveLock, which should be safe enough
in all cases. An upcoming patch will extend the existing APIs to allow
a custom lock mode to be defined.
The problem can be reproduced with bloom indexes, so add a test there.
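A hedged sketch of the kind of statement that failed (names are made
up, and the exact reproducer in the test may differ):
  CREATE INDEX blm_idx ON tbl USING bloom (i1, i2);
  ALTER INDEX blm_idx SET (length = 80);  -- previously hit the undefined lock mode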
Reported-by: Nikolay Sharplov
Analyzed-by: Thomas Munro, Michael Paquier
Author: Michael Paquier
Reviewed-by: Kuntal Ghosh
Discussion: https://postgr.es/m/20190920013831.GD1844@paquier.xyz
Backpatch-through: 9.6
The state-tracking of WAL reading in various places was pretty messy,
mostly because the ancient physical-replication WAL reading code wasn't
using the XLogReader abstraction. This led to some untidy code. Make
it prettier by creating two additional supporting structs,
WALSegmentContext and WALOpenSegment, which keep track of WAL-reading
state. This makes the code cleaner and also supports more future
cleanup.
Author: Antonin Houska
Reviewed-by: Álvaro Herrera and (older versions) Robert Haas
Discussion: https://postgr.es/m/14984.1554998742@spoje.net
Fix an oversight in commit 7266d0997: as it stood, the code failed
when a function-in-FROM returns composite and can be simplified
to a composite constant.
For the moment, just test for composite result and abandon pullup
if we see one. To make it actually work, we'd have to decompose
the composite constant into per-column constants; which is surely
do-able, but I'm not convinced it's worth the code space.
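A hedged example of the sort of query affected (type and function names
are invented for illustration):
  CREATE TYPE two_ints AS (a int, b int);
  CREATE FUNCTION const_pair() RETURNS two_ints
    LANGUAGE sql IMMUTABLE AS 'SELECT (1, 2)::two_ints';
  SELECT * FROM const_pair();
Here the planner folds const_pair() to a composite Const, and pullup is
now skipped rather than failing.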
Per report from Raúl Marín Rodríguez.
Discussion: https://postgr.es/m/CAM6_UM4isP+buRA5sWodO_MUEgutms-KDfnkwGmryc5DGj9XuQ@mail.gmail.com
When a relation is truncated, shared_buffers needs to be scanned
so that any buffers for the relation's forks are invalidated.
Previously, shared_buffers was scanned once per relation fork, i.e.,
MAIN, FSM and VM, when VACUUM truncated off any empty pages
at the end of the relation or TRUNCATE truncated the relation in place.
Since shared_buffers needed to be scanned multiple times,
those commands could take a long time to finish, especially
when shared_buffers was large.
This commit changes the logic so that shared_buffers is scanned only
once for those three relation forks.
Author: Kirk Jamison
Reviewed-by: Masahiko Sawada, Thomas Munro, Alvaro Herrera, Takayuki Tsunakawa and Fujii Masao
Discussion: https://postgr.es/m/D09B13F772D2274BB348A310EE3027C64E2067@g01jpexmbkw24
If the bitstring length is not a multiple of 8, we'd shift the
rightmost bits into the pad space, which must be zeroes --- bit_cmp,
for one, depends on that. This'd lead to the result failing to
compare equal to what it should compare equal to, as reported in
bug #16013 from Daryl Waycott.
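A hedged illustration of the symptom (before this fix; exact results
may vary by usage pattern):
  SELECT (B'1000001' >> 1) = B'0100000';
could return false, because the shifted-off rightmost bit landed in the
pad space and made the stored value compare unequal to an otherwise
identical literal.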
This is, if memory serves, not the first such bug in the bitstring
functions. In hopes of making it the last one, do a bit more work
than minimally necessary to fix the bug:
* Add assertion checks to bit_out() and varbit_out() to complain if
they are given incorrectly-padded input. This will improve the
odds that manual testing of any new patch finds problems.
* Encapsulate the padding-related logic in macros to make it
easier to use.
Also, remove unnecessary padding logic from bit_or() and bitxor().
Somebody had already noted that we need not re-pad the result of
bit_and() since the inputs are required to be the same length,
but failed to extrapolate that to the other two.
Also, move a comment block that was once near the head of varbit.c
(but people kept putting other stuff in front of it) into the file's
header block.
Note for the release notes: if anyone has inconsistent data as a
result of saving the output of bitshiftright() in a table, it's
possible to fix it with something like
UPDATE mytab SET bitcol = ~(~bitcol) WHERE bitcol != ~(~bitcol);
This has been broken since day one, so back-patch to all supported
branches.
Discussion: https://postgr.es/m/16013-c2765b6996aacae9@postgresql.org
The code used the destination slot's natts where it intended to
use the source slot's natts. Adding an Assert shows that there
is no case in "make check-world" where these counts are different,
so maybe this is a harmless bug, but it's still a bug.
Takayuki Tsunakawa
Discussion: https://postgr.es/m/0A3221C70F24FB45833433255569204D1FD34C0E@G01JPEXMBYT05
Move the responsibility for advancing the NOTIFY queue tail pointer
from the listener(s) to the notification sender, and only have the
sender do it once every few queue pages, rather than after every batch
of notifications as at present. This reduces the number of times we
execute asyncQueueAdvanceTail, and reduces contention when there are
multiple listeners (since that function requires exclusive lock).
This change relies on the observation that we don't really need the tail
pointer to be exactly up-to-date. It's certainly not necessary to
attempt to release disk space more often than once per SLRU segment.
The only other usage of the tail pointer is that an incoming listener,
if it's the only listener in its database, will need to scan the queue
forward from the tail; but that's surely a less performance-critical
path than routine sending and receiving of notifies. We compromise by
advancing the tail pointer after every 4 pages of output, so that it
shouldn't get more than a few pages behind.
Also, when sending signals to other backends after adding notify
message(s) to the queue, recognize that only backends in our own
database are going to care about those messages, so only such
backends really need to be awakened promptly. Backends in other
databases should get kicked if they're well behind on reading the
queue, else they'll hold back the global tail pointer; but wakening
them for every single message is pointless. This change can
substantially reduce signal traffic if listeners are spread among
many databases. It won't help for the common case of only a single
active database, but the extra check costs very little.
Martijn van Oosterhout, with some adjustments by me
Discussion: https://postgr.es/m/CADWG95vtRBFDdrx1JdT1_9nhOFw48KaeTev6F_LtDQAFVpSPhA@mail.gmail.com
Discussion: https://postgr.es/m/CADWG95uFj8rLM52Er80JnhRsTbb_AqPP1ANHS8XQRGbqLrU+jA@mail.gmail.com
Since we introduced the idea of leakproof functions, texteq and textne
were marked leakproof but their sibling text comparison functions were
not. This inconsistency seemed justified because texteq/textne just
relied on memcmp() and so could easily be seen to be leakproof, while
the other comparison functions are far more complex and indeed can
throw input-dependent errors.
However, that argument crashed and burned with the addition of
nondeterministic collations, because now texteq/textne may invoke
the exact same varstr_cmp() infrastructure as the rest. It makes no
sense whatever to give them different leakproofness markings.
After a certain amount of angst we've concluded that it's all right
to consider varstr_cmp() to be leakproof, mostly because the other
choice would be disastrous for performance of many queries where
leakproofness matters. The input-dependent errors should only be
reachable for corrupt input data, or so we hope anyway; certainly,
if they are reachable in practice, we've got problems with requirements
as basic as maintaining a btree index on a text column.
Hence, run around to all the SQL functions that derive from varstr_cmp()
and mark them leakproof. This should result in a useful gain in
flexibility/performance for queries in which non-leakproofness degrades
the efficiency of the query plan.
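One way to see the effect is to inspect the markings (illustrative; the
list of affected functions is broader than shown):
  SELECT proname, proleakproof FROM pg_proc
  WHERE proname IN ('texteq', 'textne', 'text_lt', 'bttextcmp');
All of these now report proleakproof = true.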
Back-patch to v12 where nondeterministic collations were added.
While this isn't an essential bug fix given the determination
that varstr_cmp() is leakproof, we might as well apply it now that
we've been forced into a post-beta4 catversion bump.
Discussion: https://postgr.es/m/31481.1568303470@sss.pgh.pa.us
text_pattern_ops and its siblings can't be used with nondeterministic
collations, because they use the texteq equality function, which will
not behave as bitwise equality if applied with a nondeterministic
collation. The initial implementation of that restriction was to insert
a run-time test in the related comparison functions, but that is
inefficient, may throw misleading errors, and will throw errors in some
cases that would otherwise work.
It seems sufficient to just prevent the combination during CREATE INDEX,
so do that instead.
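As a hedged illustration (using the case-insensitive ICU collation from
the documentation), the combination is now rejected up front:
  CREATE COLLATION case_insensitive
    (provider = icu, locale = 'und-u-ks-level2', deterministic = false);
  CREATE TABLE t (s text COLLATE case_insensitive);
  CREATE INDEX ON t (s text_pattern_ops);  -- now fails at CREATE INDEX time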
Lacking any better way to identify the opclasses involved, we need to
hard-wire tests for them, which requires hand-assigned values for their
OIDs, which forces a catversion bump because they previously had OIDs
that would be assigned automatically. That's slightly annoying in the
v12 branch, but fortunately we're not at rc1 yet, so just do it.
Back-patch to v12 where nondeterministic collations were added.
In passing, run make reformat-dat-files, which found some unrelated
whitespace issues (slightly different ones in HEAD and v12).
Peter Eisentraut, with small corrections by me
Discussion: https://postgr.es/m/22566.1568675619@sss.pgh.pa.us
DST law changes in Fiji and Norfolk Island. Historical corrections
for Alberta, Austria, Belgium, British Columbia, Cambodia, Hong Kong,
Indiana (Perry County), Kaliningrad, Kentucky, Michigan, Norfolk
Island, South Korea, and Turkey.
The new function stashes its output value in a JsonbValue that can be
passed in by the caller, which enables some callers to pass
stack-allocated structs -- saving palloc cycles. It also allows some
callers that know they are handling a jsonb object to use this new
jsonb object-specific API, instead of going through the generic
findJsonbValueFromContainer.
Author: Nikita Glukhov
Discussion: https://postgr.es/m/7c417f90-f95f-247e-ba63-d95e39c0ad14@postgrespro.ru
Instead of creating an iterator object at each step down the JSONB
object/array, we can just examine its object/array flags, which is
faster. Also, use the recently introduced JsonbValueAsText instead of
open-coding the same thing, for code simplicity.
Author: Nikita Glukhov
Discussion: https://postgr.es/m/7c417f90-f95f-247e-ba63-d95e39c0ad14@postgrespro.ru