In commit 4dc6355210 I neglected to put #ifdef USE_OPENSSL around the
declarations of the new items. This is remedied here.
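For illustration, a minimal sketch of the guard pattern (the declared
name here is hypothetical, not the actual item):

    #ifdef USE_OPENSSL
    extern int  be_tls_new_item(void);  /* hypothetical declaration */
    #endif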
Per complaint from Daniel Gustafsson.
recordExtObjInitPriv and removeExtObjInitPriv were sloppy about
calling ReleaseSysCache. The cases cannot occur given current usage
in ALTER EXTENSION ADD/DROP, since we wouldn't get here for these
relkinds; but it seems wise to clean up better.
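As a hedged illustration of the tidied-up pattern (simplified, with a
hypothetical early-exit condition, not the exact code):

    tuple = SearchSysCache1(RELOID, ObjectIdGetDatum(relid));
    if (!HeapTupleIsValid(tuple))
        elog(ERROR, "cache lookup failed for relation %u", relid);
    if (!relkind_is_interesting)        /* hypothetical early exit */
    {
        ReleaseSysCache(tuple);         /* the release that was missed */
        return;
    }
    /* ... normal processing ... */
    ReleaseSysCache(tuple);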
In passing, extend test logic in test_pg_dump to exercise the
dropped-column code paths here.
Since the case is unreachable at present, there seems no great
need to back-patch; hence fix HEAD only.
Kyotaro Horiguchi, with test case and comment adjustments by me
Discussion: https://postgr.es/m/20200417.151831.1153577605111650154.horikyota.ntt@gmail.com
The result of the query used to retrieve the WAL segment size from the
backend was not getting freed in two code paths. Both pg_basebackup and
pg_receivewal exit immediately if a failure happens on this query, so
this was not an actual problem, but it could become an issue if this code
gets used by other tools in different ways, be they future tools in
this code tree or external existing ones.
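A hedged sketch of the affected pattern (simplified from the actual
code; the error text is illustrative):

    res = PQexec(conn, "SHOW wal_segment_size");
    if (PQresultStatus(res) != PGRES_TUPLES_OK)
    {
        pg_log_error("could not get WAL segment size: %s",
                     PQerrorMessage(conn));
        PQclear(res);       /* previously missing on this failure path */
        return false;
    }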
Oversight in commit fc49e24, so backpatch down to 11.
Author: Jie Zhang
Discussion: https://postgr.es/m/970ad9508461469b9450b64027842331@G08CNEXMBPEKD06.g08.fujitsu.local
Backpatch-through: 11
It was previously thought that remove_useless_groupby_columns() needed to
keep track of which constraints the generated plan depended upon; however,
this is unnecessary. The confusion likely arose because of
check_functional_grouping(), which does need to track the dependency to
ensure VIEWs with columns which are functionally dependent on the GROUP BY
remain so. For remove_useless_groupby_columns(), cached plans will just
become invalidated when the primary key's underlying index is removed
through the normal relcache invalidation code.
Here we just remove the unneeded code which records the dependency and
update the comments. The previous comments claimed that we could not use
UNIQUE constraints for the same optimization due to lack of a
pg_constraint record for NOT NULL constraints (which are required because
NULLs can be duplicated in a unique index). Since we don't actually need a
pg_constraint record to handle the invalidation, it looks like we could
add code to do this in the future. But not today.
We're not really fixing any bug in the code here; this change just sets
the record straight on UNIQUE constraints. This code was added back in
9.6, but since there is no actual bug, we'll not be backpatching this.
Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/CAApHDvrdYa=VhOoMe4ZZjZ-G4ALnD-xuAeUNCRTL+PYMVN8OnQ@mail.gmail.com
Earlier, we were inconsistent in allowing the parallel and full options
to be used together. Change it such that we disallow them only when they
are combined in a way that we don't support.
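A hedged sketch of the kind of check this implies (not the exact code):

    /* disallow only the combination that we don't support */
    if ((params->options & VACOPT_FULL) != 0 && params->nworkers > 0)
        ereport(ERROR,
                (errcode(ERRCODE_SYNTAX_ERROR),
                 errmsg("VACUUM FULL cannot be performed in parallel")));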
In passing, improve the comments in some of the existing tests of parallel
vacuum.
Reported-by: Tushar Ahuja
Author: Justin Pryzby, Amit Kapila
Reviewed-by: Sawada Masahiko, Michael Paquier, Mahendra Singh Thalor and
Amit Kapila
Discussion: https://postgr.es/m/58c8d171-e665-6fa3-a9d3-d9423b694dae%40enterprisedb.com
Since 0d8c9c1, pg_basebackup would generate an error if connected to a
backend version older than 12 where backup manifests are not supported.
Avoiding this error is possible by using the --no-manifest option.
This error handling could be confusing for some users: to patch a backup
script that interacts with multiple backend versions, one would have to
add --no-manifest unconditionally, which would then skip the backup
manifest even for Postgres 13 and newer versions. As we want to
encourage the use of backup manifests as much as possible, this commit
silently disables manifests where not supported, instead of generating
an error.
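A hedged sketch of the new behavior (the version constant is assumed,
following pg_basebackup's naming convention):

    /* silently skip manifests when the server cannot generate them */
    if (manifest && PQserverVersion(conn) < MINIMUM_VERSION_FOR_MANIFESTS)
        manifest = false;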
While at it, rework the code a bit to make it more consistent with its
surroundings when generating the BASE_BACKUP command.
Per discussion with Andres Freund, Stephen Frost, Robert Haas, Álvaro
Herrera, Kyotaro Horiguchi, Tom Lane, David Steele, and me.
Author: Michael Paquier
Discussion: https://postgr.es/m/20200410080910.GZ1606@paquier.xyz
This commit prevents pg_basebackup from receiving the backup_manifest
file when --no-manifest is specified. Previously, when pg_basebackup was
writing a tarfile to stdout, it tried to receive the backup_manifest file
even when --no-manifest was specified, and reported an error.
Also remove the unused -m option from pg_basebackup.
Also fix a typo in the BASE_BACKUP command documentation.
Author: Fujii Masao
Reviewed-by: Michael Paquier, Robert Haas
Discussion: https://postgr.es/m/01e3ed3a-8729-5aaa-ca84-e60e3ca59db8@oss.nttdata.com
A comment from the Berkeley days incorrectly claimed that the page
management code cares about the contents of the hole in the center of
the page (at least in the case of the left half of an nbtree page
split). Commit 8fa30f906be added an addendum that stated that the
original comment was "probably obsolete". It's definitely obsolete,
though, so remove the original comment plus the addendum.
Back in commits 1df92eeafe, f884a96819, and 592123efbb I used some
hackish code to set the script search path, unaware despite decades of
perl that there was a completely standard way to do this. This patch
changes those cases to use the standard perl FindBin package.
Nest the "update metapage as part of insert into root-like page" branch
inside the broader "insert into internal page" branch. This improves
readability.
Clearly it's not okay for nbtree to split a page that is the only page
on its level, and then find that it has to split the parent one level up
in turn. There is simply no code to handle the split_only_page case in
the _bt_insertonpg() "newitem won't fit" branch (only the "newitem fits"
branch handles split_only_page). Add a defensive assertion that will
fail if a split_only_page call to _bt_insertonpg() somehow ends up
splitting the target/parent page.
I (pgeoghegan) believe that we don't need split_only_page handling for
the "newitem won't fit" branch because anybody calling _bt_insertonpg()
like this would have to hold a lock on the same one and only child page.
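A hedged sketch of the defensive check (placement approximate):

    /*
     * The "newitem won't fit" branch cannot be reached when inserting
     * into a page that is the only page on its level.
     */
    Assert(!split_only_page);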
It seems like a good idea for nbtree's retail insert code to be
absolutely consistent with nbtree's page split code for anything that
naturally requires equivalent handling. Anything that concerns
inserting newitem (which is handled as part of the page split atomic
action when a page split is required) should work in exactly the same
way. With that in mind, make _bt_insertonpg() handle 'cbuf' in a way
that matches _bt_split().
An nbtree split point can be thought of as a point between two adjoining
tuples from an imaginary version of the page being split that includes
the incoming/new item (in addition to the items that really are on the
page). These adjoining tuples are called the lastleft and firstright
tuples.
The variables that represent split points contained a field called
firstright, which is an offset number of the first data item from the
original page that goes on the new right page. The corresponding tuple
from origpage was usually the same thing as the actual firstright tuple,
but not always: the firstright tuple is sometimes the new/incoming item
instead. This situation seems unnecessarily confusing.
Make things clearer by renaming the origpage offset returned by
_bt_findsplitloc() to "firstrightoff". We now have a firstright tuple
and a firstrightoff offset number which are comparable to the
newitem/lastleft tuples and the newitemoff/lastleftoff offset numbers
respectively. Also make sure that we are consistent about how we
describe nbtree page split point state.
Push the responsibility for dealing with pg_upgrade'd !heapkeyspace
indexes down to lower level code, relieving _bt_split() from dealing
with it directly. This means that we always have a palloc'd left page
high key on the leaf level, no matter what. This enables simplifying
some of the code (and code comments) within _bt_split().
Finally, restructure the page split code to make it clearer why suffix
truncation (which only takes place during leaf page splits) is
completely different from the first data item truncation that takes place
during internal page splits. Tuples are marked as having fewer
attributes stored in both cases, and the firstright tuple is truncated
in both cases, so it's easy to imagine somebody missing the distinction.
This replaces a few occurrences of ugly code with cleaner and more
idiomatic usage. The problem was highlighted by perlcritic, but we're
not enforcing the policy that led to the discovery.
Discussion: https://postgr.es/m/20200412074245.GB623763@rfd.leadboat.com
We've had a mixture of the warnings pragma, the -w switch on the shebang
line, and no warnings at all. This patch removes the -w switch and adds
the warnings pragma to all perl sources missing it. It raises the
severity of the TestingAndDebugging::RequireUseWarnings perlcritic
policy to level 5, so that we catch any future violations.
Discussion: https://postgr.es/m/20200412074245.GB623763@rfd.leadboat.com
This makes it easier to do a web search for details of the policy that's
been violated, as well as displaying the name that might be needed for a
policy override.
Various perlcritic settings changes are being discussed, but this one
should be uncontroversial.
The placement of the fall-through comment in this code appears not to
work to suppress the warning in recent gcc. Move it to the bottom of
the case group, and add an assertion that we didn't get there through
some other code path. Also improve wording of nearby comments.
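A hedged sketch of the pattern, with hypothetical case labels:

    switch (kind)
    {
        case KIND_A:
            do_a_only_work();
            /* gcc only honors the comment here, at the bottom of the
             * case group, immediately before the next label */
            /* FALLTHROUGH */
        case KIND_B:
            /* assert we didn't arrive via some other code path */
            Assert(kind == KIND_A || kind == KIND_B);
            do_shared_work();
            break;
    }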
Julien Rouhaud, comment hacking by me
Discussion: https://postgr.es/m/CAOBaU_aLdPGU5wCpaowNLF-Q8328iR7mj1yJAhMOVsdLwY+sdg@mail.gmail.com
Specifically, remember lookup_type_cache() results instead of retrieving
them once per comparison. Under CLOBBER_CACHE_ALWAYS, this reduced
src/test/subscription/t/001_rep_changes.pl elapsed time by an order of
magnitude, which reduced check-world elapsed time by 9%.
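A hedged sketch of the memoization (the cached field is illustrative):

    /* look up comparison info once per column, not once per comparison */
    if (entry->typentry == NULL)
        entry->typentry = lookup_type_cache(att->atttypid,
                                            TYPECACHE_EQ_OPR_FINFO);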
Discussion: https://postgr.es/m/20200406085420.GC162712@rfd.leadboat.com
Before sleeping, WalSndWaitForWal() sends a keepalive if MyWalSnd->write
< sentPtr. That is important in logical replication. When the latest
physical LSN yields no logical replication messages (a common case),
that keepalive elicits a reply, and processing the reply updates
pg_stat_replication.replay_lsn. WalSndLoop() lacks that; when
WalSndLoop() slept, replay_lsn advancement could stall until
wal_receiver_status_interval elapsed. This sometimes stalled
src/test/subscription/t/001_rep_changes.pl for up to 10s.
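A hedged sketch of the idea, mirroring WalSndWaitForWal() (not the
exact code):

    /* before sleeping, nudge the receiver so replay_lsn can advance */
    if (MyWalSnd->write < sentPtr && !waiting_for_ping_response)
        WalSndKeepalive(false);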
Discussion: https://postgr.es/m/20200406063649.GA3738151@rfd.leadboat.com
Before discarding the old hash table in ExecReScanHashJoin, capture
its statistics, ensuring that we report the maximum hashtable size
across repeated rescans of the hash input relation. We can repurpose
the existing code for reporting hashtable size in parallel workers
to help with this, making the patch pretty small. This also ensures
that if rescans happen within parallel workers, we get the correct
maximums across all instances.
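A hedged sketch of the capture step (the accumulator call follows the
parallel-worker reporting code; treat the names as illustrative):

    /* in ExecReScanHashJoin, before dropping the old hash table */
    if (hashNode->hinstrument == NULL)
        hashNode->hinstrument = (HashInstrumentation *)
            palloc0(sizeof(HashInstrumentation));
    ExecHashAccumInstrumentation(hashNode->hinstrument, hashtable);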
Konstantin Knizhnik and Tom Lane, per diagnosis by Thomas Munro
of a trouble report from Alvaro Herrera.
Discussion: https://postgr.es/m/20200323165059.GA24950@alvherre.pgsql
ExecReScanHashJoin will destroy the join's hash table if it expects
that the inner relation will produce different rows on rescan.
Up to now it's not bothered to clear the additional pointer to that
hash table that exists in the child HashState node. However, it's
possible for the query to terminate without building a fresh hash
table (this happens if the outer relation is found to be empty
during the final rescan). So we can end up with a dangling pointer
to a deleted hash table. That was harmless originally, but since
9.0 EXPLAIN ANALYZE has used that pointer to print hash table
statistics. In debug builds this reproducibly results in garbage
statistics. In non-debug builds there are frequently no ill effects,
but in principle one could get wrong EXPLAIN ANALYZE output, or
perhaps even a crash if free() has released the hashtable memory
back to the OS.
To fix, just make sure we clear the additional pointer when destroying
the hash table. In problematic cases, EXPLAIN ANALYZE will then print
no hashtable statistics (reverting to its pre-9.0 behavior). This isn't
ideal, but since the problem manifests only in unusual corner cases,
it's hard to justify taking any risks to do better in the back
branches. A follow-on patch will improve matters in HEAD.
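A hedged sketch of the fix (simplified):

    /* in ExecReScanHashJoin */
    ExecHashTableDestroy(node->hj_HashTable);
    node->hj_HashTable = NULL;
    /* also clear the child HashState's copy of the pointer */
    ((HashState *) innerPlanState(node))->hashtable = NULL;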
Konstantin Knizhnik and Tom Lane, per diagnosis by Thomas Munro
of a trouble report from Alvaro Herrera.
Discussion: https://postgr.es/m/20200323165059.GA24950@alvherre.pgsql
Depending on specific values of restart_lsn or pg_current_wal_lsn()
is obviously unsafe. The previous coding tried to dodge this issue
by rounding off, but that's not good enough, as shown by multiple
buildfarm members. Nuke all the uses of these values except for
null-ness checks, pending some credible argument why we should think
something else could be usefully stable.
Kyotaro Horiguchi, further modified by me
Discussion: https://postgr.es/m/E1jM1Sa-0003mS-99@gemulon.postgresql.org
Suppress a probably-meaningless uninitialized-variable warning
(induced by my previous patch, I'm sorry to say).
Improve mark_hl_fragments()'s test for overlapping cover strings:
it failed to consider the possibility that the current string is
strictly within another one. That's unlikely given the preceding
splitting into MaxWords fragments, but I don't think it's impossible.
Discussion: https://postgr.es/m/16345-2e0cf5cddbdcd3b4@postgresql.org
This code could produce very poor results when asked to highlight a
string based on a query using phrase-match operators. The root cause
is that hlCover(), which is supposed to find a minimal substring that
matches the query, was written assuming that word position is not
significant. I'm only 95% convinced that its algorithm was correct even
for plain AND/OR queries; but it definitely fails completely for phrase
matches, causing it to possibly not identify a cover string at all.
Hence, rewrite hlCover() with a less-tense algorithm that just tries
all the possible substrings, earlier and shorter ones first. (This is
not as bad as it sounds performance-wise, because all of the string
matching has been done already: the repeated tsquery match checks
boil down to pointer comparisons.)
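A hedged sketch of the brute-force search (loop bounds and the match
helper are hypothetical):

    /* prefer earlier and shorter candidate covers */
    for (len = 1; len <= prs->curwords; len++)
        for (startpos = 0; startpos + len <= prs->curwords; startpos++)
        {
            /* the tsquery match check is just pointer comparisons */
            if (cover_matches(prs, startpos, startpos + len - 1))
                return true;
        }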
Unfortunately, since that approach produces more candidate cover
strings than before, it also exposes that there were bugs in the
heuristics in mark_hl_words() for selecting a best cover string.
Fixes there include:
* Do not apply the ShortWord filter to words that appear in the query.
* Remove a misguided optimization for quickly rejecting a cover.
* Fix order-of-operation bug that could cause computation of a
wrong figure of merit (poslen) when shortening a cover.
* Change the preference rule so that candidate headlines that do not
include their whole cover string (after MaxWords trimming) are lowest
priority, since they may not actually satisfy the user's query.
This results in some changes in existing regression test cases,
but they all seem reasonable. Note in particular that the tests
involving strings like "1 2 3" were previously being affected by
the ShortWord filter, masking the normal matching behavior.
Per bug #16345 from Augustinas Jokubauskas; the new test cases are
based on that example. Back-patch to 9.6 where phrase search was
added to tsquery.
Discussion: https://postgr.es/m/16345-2e0cf5cddbdcd3b4@postgresql.org
This code was woefully unreadable and under-commented. Try to improve
matters by adding comments, using some macros to make complicated
if-tests more readable, using boolean type where appropriate, etc.
There are a couple of tiny coding improvements too, but this commit
includes (I hope) no behavioral change.
Nonetheless, back-patch as far as 9.6, because a followup bug-fixing
commit depends on this.
Discussion: https://postgr.es/m/16345-2e0cf5cddbdcd3b4@postgresql.org
CREATE TABLE LIKE INCLUDING GENERATED would fail if a generated column
referred to a column with a higher attribute number. This is because
the column mapping mechanism created the mapping incrementally as
columns are added. This was sufficient for previous uses of that
mechanism (omitting dropped columns), and it also happened to work if
generated columns only referred to columns with lower attribute
numbers, but here it failed.
The fix is to build the attribute mapping in a separate loop before
processing the columns in detail.
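A hedged sketch of the reordering (names illustrative, not the exact
code):

    /* first pass: build the complete attribute mapping ... */
    for (parent_attno = 1; parent_attno <= tupleDesc->natts; parent_attno++)
    {
        if (!TupleDescAttr(tupleDesc, parent_attno - 1)->attisdropped)
            attmap[parent_attno - 1] = ++new_attno;
    }
    /* ... second pass: process columns in detail, where a generation
     * expression may refer to a column with a higher attribute number */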
Bug: #16342
Reported-by: Ethan Waldo <ewaldo@healthetechs.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
If there is already a backup_manifest file in the database cluster,
it belongs to the past backup that was used to start this server.
It is not correct for the backup being taken now. So this commit
changes pg_basebackup so that it always skips such a backup_manifest
file. The backup_manifest file for the current backup will be injected
separately if users want it.
Author: Fujii Masao
Reviewed-by: Robert Haas
Discussion: https://postgr.es/m/78f76a3d-1a28-a97d-0394-5c96985dd1c0@oss.nttdata.com
Currently, we don't account for buffer usage incurred by parallel workers
for parallel create index. This commit allows each worker to record its
buffer usage stats and the leader backend to accumulate those stats at the
end of the operation. This will allow pg_stat_statements to display
correct buffer usage stats for the (parallel) create index command.
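A hedged sketch of the flow, using the existing parallel-query
instrumentation calls (pre-13 signatures; the shared-memory array is
illustrative):

    /* worker, around its share of the index build */
    InstrStartParallelQuery();
    /* ... build/sort work happens here ... */
    InstrEndParallelQuery(&shared->bufusage[ParallelWorkerNumber]);

    /* leader, once all workers have finished */
    for (i = 0; i < nworkers; i++)
        InstrAccumParallelQuery(&shared->bufusage[i]);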
Reported-by: Julien Rouhaud
Author: Sawada Masahiko
Reviewed-by: Dilip Kumar, Julien Rouhaud and Amit Kapila
Backpatch-through: 11, where this was introduced
Discussion: https://postgr.es/m/20200328151721.GB12854@nol
The test never did ANALYZE on the test table, so the plans depended on
various defaults (e.g. number of groups being 200). This worked most of
the time, but with CLOBBER_CACHE_ALWAYS the autoanalyze often managed
to build accurate stats, changing the plan.
Fixed by increasing the size of test tables a bit, making the Sort a bit
more expensive than Incremental Sort. The tests were constructed to test
transitions in the Incremental Sort algorithm, and this change does not
break that.
Reviewed-by: James Coleman
Discussion: https://postgr.es/m/CAPpHfds1waRZ=NOmueYq0sx1ZSCnt+5QJvizT8ndT2=etZEeAQ@mail.gmail.com
Repair an oversight in commit 8728b2c70: if we're postponing restore
of event triggers to the end, we must also postpone restoring any
comments on them, since of course we cannot create the comments first.
(This opens yet another opportunity for an event trigger to bollix
the restore, but there's no help for that.)
Per bug #16346 from Alexander Lakhin.
Like the previous commit, back-patch to all supported branches.
Hamid Akhtar and Tom Lane
Discussion: https://postgr.es/m/16346-6210ad7a0ea81be1@postgresql.org
GetWalRcvWriteRecPtr() previously reported the latest *flushed*
location. Adopt the conventional terminology used elsewhere in the tree
by renaming it to GetWalRcvFlushRecPtr(), and likewise for some related
variables that used the term "received".
Add a new definition of GetWalRcvWriteRecPtr(), which returns the latest
*written* value. This will allow later patches to use the value for
non-data-integrity purposes, without having to wait for the flush
pointer to advance.
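Roughly, the resulting pair of entry points (sketch):

    /* latest location known flushed to disk by the walreceiver */
    extern XLogRecPtr GetWalRcvFlushRecPtr(XLogRecPtr *latestChunkStart,
                                           TimeLineID *receiveTLI);
    /* latest location written out, not necessarily flushed yet */
    extern XLogRecPtr GetWalRcvWriteRecPtr(void);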
Reviewed-by: Alvaro Herrera <alvherre@2ndquadrant.com>
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/CA%2BhUKGJ4VJN8ttxScUFM8dOKX0BrBiboo5uz1cq%3DAovOddfHpA%40mail.gmail.com
To control whether partition changes are replicated using their own
identity and schema or an ancestor's, add a new per-publication
parameter named 'publish_via_partition_root'.
This allows replicating a partitioned table into a different partition
structure on the subscriber.
Author: Amit Langote <amitlangote09@gmail.com>
Reviewed-by: Rafia Sabih <rafia.pghackers@gmail.com>
Reviewed-by: Peter Eisentraut <peter.eisentraut@2ndquadrant.com>
Reviewed-by: Petr Jelinek <petr@2ndquadrant.com>
Discussion: https://www.postgresql.org/message-id/flat/CA+HiwqH=Y85vRK3mOdjEkqFK+E=ST=eQiHdpj43L=_eJMOOznQ@mail.gmail.com
0f5ca02f53 introduces 3 new keywords. That appears to be too much for a
relatively small feature. Given that we are now past feature freeze, it
is too late to discuss the new syntax. So, revert.
Discussion: https://postgr.es/m/28209.1586294824%40sss.pgh.pa.us
Second pass of modifying various places that obtain the next power
of 2 of a number, making them use the new functions added in
f0705bb62.
In passing, also modify num_combinations(). This can be implemented
using simple bitshifting rather than looping.
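A hedged sketch of both kinds of change (surrounding code illustrative):

    /* round a request up to the next power of 2, no doubling loop */
    newsize = pg_nextpower2_32(Max(needed, 16));

    /* and the num_combinations() gist: compute 2^n with a shift */
    nways = (uint64) 1 << n;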
Reviewed-by: John Naylor
Discussion: https://postgr.es/m/20200114173553.GE32763%40fetter.org