The TimelineHistoryRead and TimelineHistoryWrite wait events are
reported while waiting for a read or a write of a timeline history
file, respectively.
However, previously the TimelineHistoryRead wait event was not reported
while readTimeLineHistory() was reading a timeline history file.
Likewise, the TimelineHistoryWrite wait event was not reported while
writeTimeLineHistory() was writing, at the end, the line containing the
details of the timeline split.
This commit fixes these issues.
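As a rough sketch of the pattern involved (not the literal patch; the
enum spelling WAIT_EVENT_TIMELINE_HISTORY_READ, the exact placement,
and the variable names are assumptions inferred from the wait event's
name), the blocking read in readTimeLineHistory() would be bracketed
with pgstat wait-event reporting calls:

    /* sketch only: report the wait event around the blocking file read */
    pgstat_report_wait_start(WAIT_EVENT_TIMELINE_HISTORY_READ);
    result = fgets(fline, sizeof(fline), fd);
    pgstat_report_wait_end();

writeTimeLineHistory() would get the same treatment with the
TimelineHistoryWrite wait event around its final write.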
Back-patch to v10 where wait events for a timeline history file were added.
Author: Masahiro Ikeda
Reviewed-by: Michael Paquier, Fujii Masao
Discussion: https://postgr.es/m/d11b0c910b63684424e06772eb844ab5@oss.nttdata.com
Factor out code common to _bt_lock_branch_parent() and _bt_pagedel()
into a new utility function. This new function is used to check that
the left sibling of a deletion target page does not have the
INCOMPLETE_SPLIT page flag set. If it is set then deletion is unsafe;
there won't be a usable pivot tuple (with a downlink) in the parent page
that points to the deletion target page. The page deletion algorithm is
not prepared to deal with that. Also restructure an existing, related
utility function that checks if the right sibling of the target page has
the ISHALFDEAD page flag set.
This organization highlights the symmetry between the two cases. The
goal is to make the design of page deletion clearer. Both functions
involve a sibling page with a flag that indicates that there was an
interrupted operation (a page split or a page deletion) that resulted in
a page pointed to by sibling pages, but not pointed to in the parent.
And, both functions indicate if page deletion is unsafe due to the
absence of a particular downlink in the parent page.
checkcondition_str() failed to report multiple matches for a prefix
pattern correctly: it would dutifully merge the match positions, but
then, after exiting the loop over prefix-matching words, if the last
such word had had no suitable positions, it would report that there
were no matches. The upshot would be failing to recognize a match that
the query should have found.
It looks like you need all of these conditions to see the bug:
* a phrase search (else we don't ask for match position details)
* a prefix search item (else we don't get to this code)
* a weight restriction (else checkclass_str won't fail)
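For illustration, here is a simplified, self-contained sketch of the bug
pattern (hypothetical code, not the actual tsvector matching logic):
only the flag from the final loop iteration was consulted, instead of
remembering whether any prefix-matching word had contributed positions.

    #include <stdbool.h>
    #include <stdio.h>

    /* Pretend word 0 matches with usable positions, word 1 matches without. */
    static bool
    word_has_positions(int i)
    {
        return (i == 0);
    }

    int
    main(void)
    {
        bool    last = false;   /* buggy: reflects only the last word */
        bool    any = false;    /* fixed: true if any word contributed */

        for (int i = 0; i < 2; i++)
        {
            last = word_has_positions(i);
            if (last)
                any = true;     /* merge this word's match positions */
        }

        /* the buggy test reports "no match" even though word 0 matched */
        printf("buggy: %s, fixed: %s\n",
               last ? "match" : "no match",
               any ? "match" : "no match");
        return 0;
    }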
Noted while investigating a problem report from Pavel Borisov,
though this is distinct from the issue he was on about.
Back-patch to 9.6 where phrase search was added.
This converts the contrib documentation to the new style, and mops up
a couple of function tables that were outside chapter 9 in the main
docs.
A few contrib modules choose not to present their functions in the
standard tabular format. There might be room to rethink those decisions
now that the standard format is more friendly to verbose descriptions.
But I have not undertaken to do that here; I just converted existing
tables.
In the wake of commit f21599311, we don't need to set table columns'
align specs retail. Undo a few such settings I'd added in commit
5545b69ae. (The column width adjustments stay, though.)
We were acquiring object locks then deleting objects one by one, instead
of acquiring all object locks first, ignoring those that did not exist,
and then deleting all objects together. The latter is the correct
protocol to use, and what this commit changes the code to do. Failing
to follow that leads to "cache lookup failed for relation XYZ" error
reports when DROP OWNED runs concurrently with other DDL -- for example,
a session termination that removes some temp tables.
Author: Álvaro Herrera
Reported-by: Mithun Chicklore Yogendra (Mithun CY)
Reviewed-by: Ahsan Hadi, Tom Lane
Discussion: https://postgr.es/m/CADq3xVZTbzK4ZLKq+dn_vB4QafXXbmMgDP3trY-GuLnib2Ai1w@mail.gmail.com
I concluded that we really just ought to force all tables in PDF output
to default to "left" alignment (instead of "justify"); that is what the
HTML toolchain does and that's what most people have been designing the
tables to look good with. There are few if any places where "justify"
produces better-looking output, and there are many where it looks
horrible. So change stylesheet-fo.xsl to make that true.
Also tweak column widths in a few more tables to make them look better
and avoid "exceed the available area" warnings. This commit fixes
basically everything that can be fixed through that approach. The
remaining tables that give warnings either are scheduled for redesign
as per recent discussions, or need a fundamental rethink because they
Just Don't Work in a narrow view.
Attempting to use a Python installation path that includes spaces
caused the MSVC builds to fail. This fixes the issue by using the same
quoting method as ad7595b for OpenSSL.
Author: Victor Wagner
Discussion: https://postgr.es/m/20200430150608.6dc6b8c4@antares.wagner.home
Backpatch-through: 9.5
Moving this setting into the main configuration file was ill-considered,
perhaps, because that typically causes it to be set before
timezone_abbreviations has been set. Which in turn means that zone
abbreviations don't work, only full zone names.
We could imagine hacking things so that such cases do work, but the
stability of the hack would be questionable, and the value isn't really
that high. Instead just document that you should use a numeric zone
offset or a full zone name.
Per bug #16404 from Reijo Suhonen.
Back-patch to v12 where this was changed.
Discussion: https://postgr.es/m/16404-4603a99603fbd04c@postgresql.org
Both the backend and libpq leaked buffers containing encrypted data
to be transmitted, so that the process size would grow roughly as
the total amount of data sent.
There were also far-less-critical leaks of the same sort in GSSAPI
session establishment.
Oversight in commit b0b39f72b, which I failed to notice while
reviewing the code in 2c0cdc818.
Per complaint from pmc@citylink.
Back-patch to v12 where this code was introduced.
Discussion: https://postgr.es/m/20200504115649.GA77072@gate.oper.dinoex.org
The following docs are updated:
- High-availability section
- pg_basebackup
- pg_receivewal
Per the principle of least privilege, we want to encourage users to
interact with those areas using roles that have replication rights, but
the documentation previously mentioned superusers first.
Author: Daniel Gustafsson
Reviewed-by: Fujii Masao, Michael Paquier
Discussion: https://postgr.es/m/ECEBD212-7101-41EB-84F3-2F356E4B6401@yesql.se
In commit 33e05f89c5, we have added the option to display WAL usage
statistics in Explain and auto_explain. The display format used two
spaces between fields, which is inconsistent with the Buffer usage
statistics, which use only one space between fields. Change the format
to make WAL usage statistics consistent with Buffer usage statistics.
This commit also changes the usage of "full page writes" to
"full page images" for WAL usage statistics, to make it consistent with
other parts of the code and docs.
Author: Julien Rouhaud, Amit Kapila
Reviewed-by: Justin Pryzby, Kyotaro Horiguchi and Amit Kapila
Discussion: https://postgr.es/m/CAB-hujrP8ZfUkvL5OYETipQwA=e3n7oqHFU=4ZLxWS_Cza3kQQ@mail.gmail.com
The PDF toolchain defaults to laying out all columns of a table with
equal widths, in contrast to the HTML rendering which automatically
varies the column widths to fit the data. In many places, this
results in very badly laid-out tables, with lots of useless whitespace
in some places and text that overruns its cell in other places.
For tables that have reasonably static content, we can improve
matters by adding <colspec> entries to hand-assign the column widths.
This commit does that for a few of the tables that were worst off;
it eliminates close to 200 "contents ... exceed the available area"
warnings in an A4 PDF build.
I also forced align="left" in these tables, overriding the PDF
toolchain's default which is evidently "justify". (The HTML toolchain
seems to default to that already.) Anyplace where things are tight
enough that we need to worry about this, forced justification tends to
look truly awful.
Add a test case to contrib/amcheck that covers the code paths used to
verify posting list tuples (tuples created when
deduplication merges together existing tuples to avoid a leaf page
split).
We had a mishmash of <replaceable>, <replaceable class="parameter">,
and <parameter> markup for operator/function arguments. Use <parameter>
consistently for things that are in fact names of parameters (including
OUT parameters), reserving <replaceable> for things that aren't. The
latter class includes some made-up-by-the-docs type class names, like
"numeric_type", as well as placeholders for arguments that don't have
well-defined types. Possibly we could do better with those categories
as well, but for the moment I'm content not to have parameter names
marked up in different ways in different places.
(This commit aligns the earlier sections of chapter 9 with a policy
that I'd arrived at while working on commit 1ad23335f, which is why
the last few sections need no changes.)
Remove one of the arguments to btvacuumpage(), and give up on the idea
that it's a recursive function. We now use the term "backtracking" to
refer to the case where an earlier block must be visited to make sure no
tuples that need to be removed were missed.
Advertising btvacuumpage() as a recursive function was unhelpful. In
reality the function always simulates recursion with a loop (it doesn't
actually call itself). Simulating recursion with a loop wasn't just a
precaution (per the comments mentioning tail recursion), though: there
is no reliable natural limit on the number of times we can backtrack.
There are important behavioral differences when "recursing"/backtracking,
mostly related to page deletion. We don't perform page deletion when
backtracking due to the extra complexity. And when we recurse, we're
not performing a physical order scan anymore, so we expect fairly
different conditions to hold for the page. Structuring the code like
this makes it clearer how _bt_pagedel() cooperates with btvacuumpage()
and btvacuumscan() (as established in commit b0229f26 and commit
73a076b0).
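As a rough, self-contained sketch of the control flow (hypothetical
code, not the nbtree implementation): backtracking is a loop that jumps
back to an earlier block number rather than a recursive call, so there
is no call-stack bound on how often it can happen.

    #include <stdbool.h>
    #include <stdio.h>

    #define INVALID_BLOCK   (-1)

    /*
     * Pretend per-block processing; returns an earlier block that must be
     * revisited, or INVALID_BLOCK if none.  Here block 7 sends us back to
     * block 3.
     */
    static int
    process_block(int blkno, bool backtracking)
    {
        printf("block %d%s\n", blkno, backtracking ? " (backtracking)" : "");
        return (blkno == 7) ? 3 : INVALID_BLOCK;
    }

    static void
    vacuum_one_scan_position(int scanblkno)
    {
        int     blkno = scanblkno;
        bool    backtracking = false;

        for (;;)
        {
            int     backtrack_to = process_block(blkno, backtracking);

            if (backtrack_to == INVALID_BLOCK)
                break;              /* nothing left to revisit */

            /* loop back to the earlier block instead of recursing */
            blkno = backtrack_to;
            backtracking = true;    /* e.g. skip page deletion here */
        }
    }

    int
    main(void)
    {
        vacuum_one_scan_position(7);
        return 0;
    }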
Author: Peter Geoghegan
Reviewed-By: Masahiko Sawada
Discussion: https://postgr.es/m/CAH2-WzmRGMDWiLMcb+zagG9652PboNN4Gfcq1Gc_wJL6A716MA@mail.gmail.com
If the client is compiled with GSSAPI support and tries to start up GSS
with the server, but the server is not compiled with GSSAPI support, we
would mistakenly end up falling through to call ProcessStartupPacket
with secure_done = true, but the client might then try to perform SSL,
which the backend wouldn't understand and we'd end up failing the
connection with:
FATAL: unsupported frontend protocol 1234.5679: server supports 2.0 to 3.0
Fix by arranging to track ssl_done independently from gss_done, instead
of trying to use the same boolean for both.
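As a simplified, hypothetical illustration of the idea (not the
backend's actual startup-packet code): with separate flags, refusing
the GSS request does not also mark SSL negotiation as handled, so the
client's SSL fallback still gets a sensible response.

    #include <stdbool.h>
    #include <stdio.h>

    enum request { REQ_GSS, REQ_SSL, REQ_STARTUP };

    int
    main(void)
    {
        /* client tries GSS first, falls back to SSL, then starts up */
        enum request    incoming[] = {REQ_GSS, REQ_SSL, REQ_STARTUP};
        bool            ssl_done = false;
        bool            gss_done = false;

        for (int i = 0; i < 3; i++)
        {
            if (incoming[i] == REQ_GSS && !gss_done)
            {
                gss_done = true;    /* GSS refused; SSL still permitted */
                printf("GSS request refused (no GSSAPI support)\n");
            }
            else if (incoming[i] == REQ_SSL && !ssl_done)
            {
                /* with a single shared flag, this branch would never run */
                ssl_done = true;
                printf("SSL negotiation handled\n");
            }
            else if (incoming[i] == REQ_STARTUP)
                printf("normal startup packet processed\n");
        }
        return 0;
    }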
Author: Andrew Gierth
Discussion: https://postgr.es/m/87h82kzwqn.fsf@news-spur.riddles.org.uk
Backpatch: 12-, where GSSAPI encryption was added.
Writing a trailing semicolon in a macro is almost never the right thing,
because you almost always want to write a semicolon after each macro
call instead. (Even if there was some reason to prefer not to, pgindent
would probably make a hash of code formatted that way; so within PG the
rule should basically be "don't do it".) Thus, if we have a semi inside
the macro, the compiler sees "something;;". Much of the time the extra
empty statement is harmless, but it could lead to mysterious syntax
errors at call sites. In perhaps an overabundance of neatnik-ism, let's
run around and get rid of the excess semicolons wherever possible.
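A minimal, hypothetical example of the hazard (not code from the tree):
with the semicolon baked into the macro, an if/else caller gets the
"something;;" expansion and a dangling else.

    #include <stdio.h>

    #define CLEAR_BAD(x)    (x) = 0;    /* trailing semicolon: hazardous */
    #define CLEAR_GOOD(x)   (x) = 0     /* caller supplies the semicolon */

    int
    main(void)
    {
        int     v = 42;

        if (v > 0)
            CLEAR_GOOD(v);  /* expands to "(v) = 0;" -- fine */
        else
            printf("not positive\n");

        /*
         * Writing CLEAR_BAD(v); above instead would expand to "(v) = 0;;":
         * the second semicolon is an empty statement that ends the if, so
         * the "else" no longer attaches to anything -- a syntax error.
         */
        printf("v = %d\n", v);
        return 0;
    }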
The only thing worse than a mysterious syntax error is a mysterious
syntax error that only happens in the back branches; therefore,
backpatch these changes where relevant, which is most of them because
most of these mistakes are old. (The lack of reported problems shows
that this is largely a hypothetical issue, but still, it could bite
us in some future patch.)
John Naylor and Tom Lane
Discussion: https://postgr.es/m/CACPNZCs0qWTqJ2QUSGJ07B7uvAvzMb-KbG2q+oo+J3tsWN5cqw@mail.gmail.com
On further reflection, code comments added by commit b0229f26 slightly
misrepresented how we determine the oldest btpo.xact for the index.
btvacuumpage() does not treat the btpo.xact of a page that it put in the
FSM as a candidate to be the oldest deleted page (the delete-marked page
that has the oldest btpo.xact XID among all pages encountered).
The definition of a deleted page for the purposes of the btpo.xact
calculation is different from the definition used by the bulk delete
statistics. The bulk delete statistics don't distinguish between pages
that were deleted by the current VACUUM, pages deleted by a previous
VACUUM operation but not yet recyclable/reusable, and pages that are
reusable (though reusable pages are counted separately).
Backpatch: 11-, just like commit b0229f26.
The logic for determining how many nbtree pages in an index are deleted
pages sometimes undercounted pages. Pages that were deleted by the
current VACUUM operation (as opposed to some previous VACUUM operation
whose deleted pages have yet to be reused) were sometimes overlooked.
The final count is exposed to users through VACUUM VERBOSE's "%u index
pages have been deleted" output.
btvacuumpage() avoided double-counting when _bt_pagedel() deleted more
than one page by assuming that only one page was deleted, and that the
additional deleted pages would get picked up during a future call to
btvacuumpage() by the same VACUUM operation. _bt_pagedel() can
legitimately delete pages that the btvacuumscan() scan will not visit
again, though, so that assumption was slightly faulty.
Fix the accounting by teaching _bt_pagedel() about its caller's
requirements. It now only reports on pages that it knows btvacuumscan()
won't visit again (including the current btvacuumpage() page), so
everything works out in the end.
This bug has been around forever. Only backpatch to v11, though, to
keep _bt_pagedel() in sync on the branches that have today's bugfix
commit b0229f26da. Note that this commit changes the signature of
_bt_pagedel(), just like commit b0229f26da.
Author: Peter Geoghegan
Reviewed-By: Masahiko Sawada
Discussion: https://postgr.es/m/CAH2-WzkrXBcMQWAYUJMFTTvzx_r4q=pYSjDe07JnUXhe+OZnJA@mail.gmail.com
Backpatch: 11-
Commit 857f9c36cda (which taught nbtree VACUUM to skip a scan of the
index from btvacuumcleanup() in situations where it doesn't seem worth
it) made VACUUM maintain the oldest btpo.xact among all deleted pages
for the index as a whole. It failed to handle all the details
surrounding pages
that are deleted by the current VACUUM operation correctly (though pages
deleted by some previous VACUUM operation were processed correctly).
The most immediate problem was that the special area of the page was
examined without a buffer pin at one point. More fundamentally, the
handling failed to account for the full range of _bt_pagedel()
behaviors. For example, _bt_pagedel() sometimes deletes internal pages
in passing, as part of deleting an entire subtree with the
btvacuumpage() caller's page as the leaf level page. The original leaf
page passed to
_bt_pagedel() might not be the page that it deletes first in cases where
deletion can take place.
It's unclear how disruptive this bug may have been, or what symptoms
users might want to look out for. The issue was spotted during
unrelated code review.
To fix, push down the logic for maintaining the oldest btpo.xact to
_bt_pagedel(). btvacuumpage() is now responsible for pages that were
fully deleted by a previous VACUUM operation, while _bt_pagedel() is now
responsible for pages that were deleted by the current VACUUM operation
(this includes half-dead pages from a previous interrupted VACUUM
operation that become fully deleted in _bt_pagedel()). Note that
_bt_pagedel() should never encounter an existing deleted page.
This commit theoretically breaks the ABI of a stable release by changing
the signature of _bt_pagedel(). However, if any third party extension
is actually affected by this, then it must already be completely broken
(since there are numerous assumptions made in _bt_pagedel() that cannot
be met outside of VACUUM). It seems highly unlikely that such an
extension actually exists, in any case.
Author: Peter Geoghegan
Reviewed-By: Masahiko Sawada
Discussion: https://postgr.es/m/CAH2-WzkrXBcMQWAYUJMFTTvzx_r4q=pYSjDe07JnUXhe+OZnJA@mail.gmail.com
Backpatch: 11-, where the "skip full scan" feature was introduced.