This commit prevents pg_basebackup from receiving the backup_manifest
file when --no-manifest is specified. Previously, when pg_basebackup
was writing a tarfile to stdout, it tried to receive the
backup_manifest file even when --no-manifest was specified, and
reported an error.
Also remove the unused -m option from pg_basebackup.
Also fix a typo in the BASE_BACKUP command documentation.
Author: Fujii Masao
Reviewed-by: Michael Paquier, Robert Haas
Discussion: https://postgr.es/m/01e3ed3a-8729-5aaa-ca84-e60e3ca59db8@oss.nttdata.com
A comment from the Berkeley days incorrectly claimed that the page
management code cares about the contents of the hole in the center of
the page (at least in the case of the left half of an nbtree page
split). Commit 8fa30f906be added an addendum that stated that the
original comment was "probably obsolete". It's definitely obsolete,
though, so remove the original comment plus the addendum.
Back in commits 1df92eeafe, f884a96819, and 592123efbb I used some
hackish code to set the script search path, unaware, despite decades
of Perl, that there was a completely standard way to do this. This
patch changes those cases to use the standard Perl FindBin package.
Nest the "update metapage as part of insert into root-like page" branch
inside the broader "insert into internal page" branch. This improves
readability.
Clearly it's not okay for nbtree to split a page that is the only page
on its level, and then find that it has to split the parent one level up
in turn. There is simply no code to handle the split_only_page case in
the _bt_insertonpg() "newitem won't fit" branch (only the "newitem fits"
branch handles split_only_page). Add a defensive assertion that will
fail if a split_only_page call to _bt_insertonpg() somehow ends up
splitting the target/parent page.
I (pgeoghegan) believe that we don't need split_only_page handling for
the "newitem won't fit" branch because anybody calling _bt_insertonpg()
like this would have to hold a lock on the same one and only child page.
It seems like a good idea for nbtree's retail insert code to be
absolutely consistent with nbtree's page split code for anything that
naturally requires equivalent handling. Anything that concerns
inserting newitem (which is handled as part of the page split atomic
action when a page split is required) should work in exactly the same
way. With that in mind, make _bt_insertonpg() handle 'cbuf' in a way
that matches _bt_split().
Now that warnings are enabled across the board, this code, which
tries to print an undef variable, emits a warning. Silently printing
the empty string preserves the previous behavior.
Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
Reviewed-by: Andrew Dunstan <andrew.dunstan@2ndquadrant.com>
Discussion: https://postgr.es/m/E1jO1VT-0008Qk-TM@gemulon.postgresql.org
An nbtree split point can be thought of as a point between two adjoining
tuples from an imaginary version of the page being split that includes
the incoming/new item (in addition to the items that really are on the
page). These adjoining tuples are called the lastleft and firstright
tuples.
The variables that represent split points contained a field called
firstright, which is an offset number of the first data item from the
original page that goes on the new right page. The corresponding tuple
from origpage was usually the same thing as the actual firstright tuple,
but not always: the firstright tuple is sometimes the new/incoming item
instead. This situation seems unnecessarily confusing.
Make things clearer by renaming the origpage offset returned by
_bt_findsplitloc() to "firstrightoff". We now have a firstright tuple
and a firstrightoff offset number which are comparable to the
newitem/lastleft tuples and the newitemoff/lastleftoff offset numbers
respectively. Also make sure that we are consistent about how we
describe nbtree page split point state.
Push the responsibility for dealing with pg_upgrade'd !heapkeyspace
indexes down to lower level code, relieving _bt_split() from dealing
with it directly. This means that we always have a palloc'd left page
high key on the leaf level, no matter what. This enables simplifying
some of the code (and code comments) within _bt_split().
Finally, restructure the page split code to make it clearer why suffix
truncation (which only takes place during leaf page splits) is
completely different to the first data item truncation that takes place
during internal page splits. Tuples are marked as having fewer
attributes stored in both cases, and the firstright tuple is truncated
in both cases, so it's easy to imagine somebody missing the distinction.
This replaces a few occurrences of ugly code with cleaner, more
idiomatic usage. The problem was highlighted by perlcritic, but we're
not enforcing the policy that led to the discovery.
Discussion: https://postgr.es/m/20200412074245.GB623763@rfd.leadboat.com
We've had a mixture of the warnings pragma, the -w switch on the
shebang line, and no warnings at all. This patch removes the -w
switch and adds the warnings pragma to all Perl sources missing it.
It raises the severity of the TestingAndDebugging::RequireUseWarnings
perlcritic policy to level 5, so that we catch any future violations.
Discussion: https://postgr.es/m/20200412074245.GB623763@rfd.leadboat.com
This makes it easier to do a web search for details of the policy that's
been violated, as well as displaying the name that might be needed for a
policy override.
Various perlcritic settings changes are being discussed, but this one
should be uncontroversial.
We've long fought with the draconian space limitations of our
traditional table layout for describing SQL functions and operators.
This commit introduces a new approach, though so far I've only applied
it to a few of those tables. The new way makes use of DocBook's support
for different layouts in different rows of a table, and allows the
descriptions and examples for a function or operator to run to several
lines without as much ugliness and wasted space as before.
The core layout concept is now
    Name    Signature
            Description
            Example    Example Result
so that a function or operator really has three table rows not one,
but we group them to look like one row by having the name column
have only one entry for all three rows. (Actually, there could be
four or more rows if you wanted to have more than one example, which
is another thing that was painful before but works easily now.)
This is handled by a "morerows" annotation on the name entry, which
isn't perfect (notably, the toolchain is not smart enough to avoid
breaking these row groups across PDF pages) but there seems no better
solution in DocBook. The name column is normally fairly narrow,
allowing plenty of space for the other column(s), and not wasting too
much space when one of the other components runs to multiple lines.
The varying row layout is managed by defining named "spans" and then
tagging entries with a "spanname" of "name", "sig", "desc", "example",
or "exresult". This provides a bit of semantic annotation to go with
the formatting improvement, which seems like a good thing. (It seems
that we have to re-define these spans afresh for each table, which is
annoying, but it's not any worse than the duplication involved in
the table headers. At least that gives us an opportunity to vary the
relative column widths per-table, which is handy since function tables
sometimes need much wider name columns than operator tables.)
Signature entries should be written in the style
    <function>fname</function>(<type>typename</type> ...)
    <returnvalue>typename</returnvalue>
The <returnvalue> tag produces a right arrow before the result type
name. (I'll document that convention in a user-visible place later.)
While this provides significantly more horizontal space than before
for examples, it's still true that PDF output is a lot narrower than
typical webpage viewing windows, so some examples need to be broken
in places where there is no whitespace. I've added &zwsp; markers in
suitable places to allow the tables to render warning-free in PDF.
I've so far converted only the date/time operator, date/time function,
and enum function tables in sections 9.9 and 9.10; these were chosen
to provide a reasonable sample of the formatting problems that need
to be solved. Assuming that this looks good on the website and doesn't
provoke howls of anguish, I'll work on the other similar tables in the
near future.
There's a moderate amount of new editorial content in this patch along
with the raw formatting changes; for instance I had to write text
descriptions for operators that lacked them. I failed to resist the
temptation to improve some other descriptions and examples, too.
Patch by me, with thanks to Alexander Lakhin for assistance with
figuring out some formatting issues.
Discussion: https://postgr.es/m/9326.1581457869@sss.pgh.pa.us
We already had a couple of places using zero-width spaces for formatting
hackery, and we're going to need more if we ever want the PDF manuals to
look decent. But please let's not write hard-coded Unicode escapes.
We can avoid that by using a custom entity, which also provides a place
to put a teeny bit of documentation about what it is and how to use it.
I'd previously posted a patch using "&break;" for this, but on reflection
that would be horrible to grep for. Instead let's use "&zwsp;", based
on the name of the Unicode symbol ("zero width space").
Discussion: https://postgr.es/m/9326.1581457869@sss.pgh.pa.us
The placement of the fall-through comment in this code appears not to
work to suppress the warning in recent gcc. Move it to the bottom of
the case group, and add an assertion that we didn't get there through
some other code path. Also improve wording of nearby comments.
Julien Rouhaud, comment hacking by me
Discussion: https://postgr.es/m/CAOBaU_aLdPGU5wCpaowNLF-Q8328iR7mj1yJAhMOVsdLwY+sdg@mail.gmail.com
Specifically, remember lookup_type_cache() results instead of retrieving
them once per comparison. Under CLOBBER_CACHE_ALWAYS, this reduced
src/test/subscription/t/001_rep_changes.pl elapsed time by an order of
magnitude, which reduced check-world elapsed time by 9%.
Discussion: https://postgr.es/m/20200406085420.GC162712@rfd.leadboat.com
Before sleeping, WalSndWaitForWal() sends a keepalive if MyWalSnd->write
< sentPtr. That is important in logical replication. When the latest
physical LSN yields no logical replication messages (a common case),
that keepalive elicits a reply, and processing the reply updates
pg_stat_replication.replay_lsn. WalSndLoop() lacks that; when
WalSndLoop() slept, replay_lsn advancement could stall until
wal_receiver_status_interval elapsed. This sometimes stalled
src/test/subscription/t/001_rep_changes.pl for up to 10s.
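For context, replay_lsn as seen on the primary comes from a query
like the following (illustrative only; assumes an attached standby or
logical subscriber):
    -- replay_lsn should keep advancing even while the walsender is
    -- otherwise idle, once the keepalive/reply round trip happens.
    SELECT application_name, sent_lsn, write_lsn, replay_lsn
    FROM pg_stat_replication;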
Discussion: https://postgr.es/m/20200406063649.GA3738151@rfd.leadboat.com
Before discarding the old hash table in ExecReScanHashJoin, capture
its statistics, ensuring that we report the maximum hashtable size
across repeated rescans of the hash input relation. We can repurpose
the existing code for reporting hashtable size in parallel workers
to help with this, making the patch pretty small. This also ensures
that if rescans happen within parallel workers, we get the correct
maximums across all instances.
Konstantin Knizhnik and Tom Lane, per diagnosis by Thomas Munro
of a trouble report from Alvaro Herrera.
Discussion: https://postgr.es/m/20200323165059.GA24950@alvherre.pgsql
ExecReScanHashJoin will destroy the join's hash table if it expects
that the inner relation will produce different rows on rescan.
Up to now it's not bothered to clear the additional pointer to that
hash table that exists in the child HashState node. However, it's
possible for the query to terminate without building a fresh hash
table (this happens if the outer relation is found to be empty
during the final rescan). So we can end up with a dangling pointer
to a deleted hash table. That was harmless originally, but since
9.0 EXPLAIN ANALYZE has used that pointer to print hash table
statistics. In debug builds this reproducibly results in garbage
statistics. In non-debug builds there are frequently no ill effects,
but in principle one could get wrong EXPLAIN ANALYZE output, or
perhaps even a crash if free() has released the hashtable memory
back to the OS.
To fix, just make sure we clear the additional pointer when destroying
the hash table. In problematic cases, EXPLAIN ANALYZE will then print
no hashtable statistics (reverting to its pre-9.0 behavior). This isn't
ideal, but since the problem manifests only in unusual corner cases,
it's hard to justify taking any risks to do better in the back
branches. A follow-on patch will improve matters in HEAD.
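For illustration only, a rescanned hash join of the kind at issue can
be provoked with a lateral subquery (t1 and t2 are hypothetical
tables; the exact plan depends on the optimizer):
    -- Each outer row rescans the lateral subquery; if the planner
    -- uses a hash join whose inner side depends on g.i,
    -- ExecReScanHashJoin destroys and rebuilds the hash table on each
    -- rescan, and EXPLAIN ANALYZE reports hash statistics afterwards.
    EXPLAIN (ANALYZE, COSTS OFF)
    SELECT s.n
    FROM generate_series(1, 3) AS g(i),
         LATERAL (SELECT count(*) AS n
                  FROM t1 JOIN t2 ON t1.id = t2.id
                  WHERE t2.val = g.i) AS s;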
Konstantin Knizhnik and Tom Lane, per diagnosis by Thomas Munro
of a trouble report from Alvaro Herrera.
Discussion: https://postgr.es/m/20200323165059.GA24950@alvherre.pgsql
Commit ac8623760 "fixed" a typo in an example of what would happen in
the event of a typo. Restore the original typo and add a comment about
its intentionality. Backpatch to 12 where the error was introduced.
Per report from IRC user Nicolás Alvarez.
Add a DEBUG1 message indicating that verification of the index structure
is underway. Also reduce the severity level of the existing "tree
level" debug message to DEBUG1. It should never have been made DEBUG2.
Any B-Tree index with more than a couple of levels will generally also
have so many pages that the per-page DEBUG2 messages will become
completely unmanageable.
In passing, add a new "Tip" to the docs advising users who run into
corruption that the debug messages might provide useful additional
context.
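For example, the new message can be seen like this (some_idx is a
hypothetical index name; requires the amcheck extension):
    -- DEBUG1 now reports that verification is underway, plus the
    -- tree level of the target index.
    SET client_min_messages = debug1;
    SELECT bt_index_check('some_idx');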
The docs explained that a SHARE ROW EXCLUSIVE lock is needed on the
referenced table, but failed to say the same about the table being
altered. Since the page says that ACCESS EXCLUSIVE lock is taken
unless otherwise stated, this left readers with the wrong conclusion.
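Concretely (hypothetical tables), a statement like the following
takes SHARE ROW EXCLUSIVE on both orders and customers, not ACCESS
EXCLUSIVE on the altered table:
    ALTER TABLE orders
        ADD CONSTRAINT orders_customer_fk
        FOREIGN KEY (customer_id) REFERENCES customers (id);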
Discussion: https://postgr.es/m/834603375.3470346.1586482852542@mail.yahoo.com
CREATE GROUP is an exact alias for CREATE ROLE, and CREATE USER is
almost an exact alias, as can easily be confirmed by checking the
code. So the man page syntax descriptions ought to match up. The
last few additions of role options seem to have forgotten to update
create_group.sgml, though. Fix that, and add a naggy reminder to
create_role.sgml in hopes of not forgetting again.
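As an illustration (arbitrary role names), both forms accept the same
role options:
    -- CREATE GROUP is an exact alias for CREATE ROLE, so options
    -- such as NOLOGIN and CREATEDB are valid in both.
    CREATE ROLE analysts NOLOGIN CREATEDB;
    CREATE GROUP reporting NOLOGIN CREATEDB;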
Discussion: https://postgr.es/m/158647836143.655.9853963229391401576@wrigleys.postgresql.org
Depending on specific values of restart_lsn or pg_current_wal_lsn()
is obviously unsafe. The previous coding tried to dodge this issue
by rounding off, but that's not good enough, as shown by multiple
buildfarm members. Nuke all the uses of these values except for
null-ness checks, pending some credible argument why we should think
something else could be usefully stable.
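The surviving checks are roughly of this shape (illustrative;
regression_slot is a hypothetical slot name):
    -- Only the null-ness of these LSN values is stable enough to test.
    SELECT restart_lsn IS NOT NULL AS has_restart_lsn
    FROM pg_replication_slots
    WHERE slot_name = 'regression_slot';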
Kyotaro Horiguchi, further modified by me
Discussion: https://postgr.es/m/E1jM1Sa-0003mS-99@gemulon.postgresql.org
Suppress a probably-meaningless uninitialized-variable warning
(induced by my previous patch, I'm sorry to say).
Improve mark_hl_fragments()'s test for overlapping cover strings:
it failed to consider the possibility that the current string is
strictly within another one. That's unlikely given the preceding
splitting into MaxWords fragments, but I don't think it's impossible.
Discussion: https://postgr.es/m/16345-2e0cf5cddbdcd3b4@postgresql.org
This code could produce very poor results when asked to highlight a
string based on a query using phrase-match operators. The root cause
is that hlCover(), which is supposed to find a minimal substring that
matches the query, was written assuming that word position is not
significant. I'm only 95% convinced that its algorithm was correct even
for plain AND/OR queries; but it definitely fails completely for phrase
matches, causing it to possibly not identify a cover string at all.
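The problem is easiest to see with a phrase query; for example
(illustrative text):
    -- Before this fix, a phrase match like this could leave hlCover()
    -- without any cover string, so the headline ignored the match.
    SELECT ts_headline('english',
                       'The quick brown fox jumps over the lazy dog',
                       phraseto_tsquery('english', 'lazy dog'));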
Hence, rewrite hlCover() with a simpler brute-force algorithm that just tries
all the possible substrings, earlier and shorter ones first. (This is
not as bad as it sounds performance-wise, because all of the string
matching has been done already: the repeated tsquery match checks
boil down to pointer comparisons.)
Unfortunately, since that approach produces more candidate cover
strings than before, it also exposes that there were bugs in the
heuristics in mark_hl_words() for selecting a best cover string.
Fixes there include:
* Do not apply the ShortWord filter to words that appear in the query.
* Remove a misguided optimization for quickly rejecting a cover.
* Fix order-of-operation bug that could cause computation of a
wrong figure of merit (poslen) when shortening a cover.
* Change the preference rule so that candidate headlines that do not
include their whole cover string (after MaxWords trimming) are lowest
priority, since they may not actually satisfy the user's query.
This results in some changes in existing regression test cases,
but they all seem reasonable. Note in particular that the tests
involving strings like "1 2 3" were previously being affected by
the ShortWord filter, masking the normal matching behavior.
Per bug #16345 from Augustinas Jokubauskas; the new test cases are
based on that example. Back-patch to 9.6 where phrase search was
added to tsquery.
Discussion: https://postgr.es/m/16345-2e0cf5cddbdcd3b4@postgresql.org
This code was woefully unreadable and under-commented. Try to improve
matters by adding comments, using some macros to make complicated
if-tests more readable, using boolean type where appropriate, etc.
There are a couple of tiny coding improvements too, but this commit
includes (I hope) no behavioral change.
Nonetheless, back-patch as far as 9.6, because a followup bug-fixing
commit depends on this.
Discussion: https://postgr.es/m/16345-2e0cf5cddbdcd3b4@postgresql.org
CREATE TABLE LIKE INCLUDING GENERATED would fail if a generated
column referred to a column with a higher attribute number. This is
because the column mapping mechanism created the mapping
incrementally as columns were added. This was sufficient for
previous uses of that mechanism (omitting dropped columns), and it
also happened to work if generated columns only referred to columns
with lower attribute numbers, but here it failed.
The fix is to build the attribute mapping in a separate loop before
processing the columns in detail.
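The failing case looks like this (hypothetical tables): the generated
column a (attribute 1) refers to b (attribute 2), so copying a's
expression needed a mapping entry that had not been created yet:
    CREATE TABLE src (
        a int GENERATED ALWAYS AS (b * 2) STORED,
        b int
    );
    -- Previously failed; works with the mapping built up front.
    CREATE TABLE copy_of_src (LIKE src INCLUDING GENERATED);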
Bug: #16342
Reported-by: Ethan Waldo <ewaldo@healthetechs.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
If there is already a backup_manifest file in the database cluster,
it belongs to the past backup that was used to start this server.
It is not correct for the backup being taken now. So this commit
changes pg_basebackup so that it always skips any such backup_manifest
file. The backup_manifest file for the current backup will be injected
separately if users want it.
Author: Fujii Masao
Reviewed-by: Robert Haas
Discussion: https://postgr.es/m/78f76a3d-1a28-a97d-0394-5c96985dd1c0@oss.nttdata.com
Currently, we don't account for buffer usage incurred by parallel
workers for parallel create index. This commit allows each worker to
record its buffer usage stats and the leader backend to accumulate
those stats at the end of the operation. This will allow
pg_stat_statements to display correct buffer usage stats for a
(parallel) CREATE INDEX command.
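For example, with pg_stat_statements installed, the buffers touched
by parallel workers during an index build now show up in its counters
(hypothetical table and index names):
    CREATE INDEX big_tbl_val_idx ON big_tbl (val);
    SELECT query, shared_blks_hit, shared_blks_read
    FROM pg_stat_statements
    WHERE query LIKE 'CREATE INDEX%';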
Reported-by: Julien Rouhaud
Author: Sawada Masahiko
Reviewed-by: Dilip Kumar, Julien Rouhaud and Amit Kapila
Backpatch-through: 11, where this was introduced
Discussion: https://postgr.es/m/20200328151721.GB12854@nol
The added note explains that the number of plans and the number of
executions of a statement are not always expected to match, because
their statistics are updated at their respective end phases, and only
for successful operations.
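Something like this shows the two counters side by side:
    -- plans and calls can legitimately differ: a statement that is
    -- planned but fails during execution bumps only plans, while a
    -- prepared statement reusing a generic plan bumps only calls.
    SELECT query, plans, calls
    FROM pg_stat_statements
    ORDER BY calls DESC;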
Author: Pascal Legrand, Julien Rouhaud, tweaked a bit by Fujii Masao
Discussion: https://postgr.es/m/1585857868967-0.post@n3.nabble.com