API and mechanism to allow generic messages to be inserted into WAL that are
intended to be read by logical decoding plugins. This commit adds an optional
new callback to the logical decoding API.
Messages are either text or bytea. They can be transactional or not, and
are identified by a prefix so that multiple concurrent decoding plugins can
tell their messages apart.
(Not to be confused with Generic WAL records, which are intended to allow crash
recovery of extensible objects.)
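For illustration, a sketch of such a callback (signature per the new
logical decoding API; the function name is hypothetical, and it would be
registered as cb->message_cb in _PG_output_plugin_init):

    static void
    my_message_cb(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,
                  XLogRecPtr lsn, bool transactional,
                  const char *prefix, Size message_size,
                  const char *message)
    {
        /* txn may be NULL for non-transactional messages */
    }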
Author: Petr Jelinek and Andres Freund
Reviewers: Artur Zakirov, Tomas Vondra, Simon Riggs
Discussion: 5685F999.6010202@2ndquadrant.com
The restore() function assumed that the result of sprintf() with %e format
would necessarily contain an 'e', which is false: what if the supplied
number is an infinity or NaN? If that did happen, we'd get a
null-pointer-dereference core dump. The case appears impossible currently,
because seg_in() does not accept such values, and there are no seg-creating
functions that would create one. But it seems unwise to rely on it never
happening in future.
Quite aside from that, the code was pretty ugly: it relied on modifying a
static format string when it could use a "*" precision argument, and it
used strtok() entirely gratuitously, and it stripped off trailing spaces
by hand instead of just not asking for them to begin with.
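A minimal standalone sketch of the safer formatting pattern (not the
committed code; assumes <stdio.h> and <string.h>):

    static void
    format_exponential(char *buf, size_t buflen, int digits, float val)
    {
        char       *e;

        /* use "*" precision rather than patching a static format string */
        snprintf(buf, buflen, "%.*e", digits, val);

        /* "inf"/"nan" contain no 'e'; never assume strchr() succeeds */
        e = strchr(buf, 'e');
        if (e == NULL)
            return;             /* non-finite value: nothing to adjust */

        /* ... otherwise rework mantissa/exponent as needed ... */
    }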
Coverity noticed the potential null pointer dereference (though I wonder
why it didn't complain years ago, since this code is ancient).
Since this is just code cleanup and forestalling a hypothetical future
bug, there seems no need for back-patching.
The code assumed that rd_amcache wouldn't disappear from under it
during a scan, which is wrong. Copy the data out of the relcache rather
than trying to reference it there.
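Roughly, the fix follows this pattern (type and function names simplified,
not the actual bloom code):

    static void
    initState(BloomState *state, Relation index)
    {
        /* scan-local copy; a relcache inval can free rd_amcache */
        memcpy(&state->opts, index->rd_amcache, sizeof(state->opts));
    }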
Coverity complained about implicit sign-extension in the
BloomPageGetFreeSpace macro, probably because sizeOfBloomTuple isn't wide
enough for size calculations. No overflow is really possible as long as
maxoff and sizeOfBloomTuple are small enough to represent a realistic
situation, but it seems like a good idea to declare sizeOfBloomTuple as
Size not int32.
Add missing check on BloomPageAddItem() result, again from Coverity.
Avoid core dump due to not allocating so->sign array when
scan->numberOfKeys is zero. Also thanks to Coverity.
Use FLEXIBLE_ARRAY_MEMBER rather than declaring an array as size 1
when it isn't necessarily that size.
Very minor beautification of related code.
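For illustration, a generic sketch of the FLEXIBLE_ARRAY_MEMBER change
(stand-in struct, not the actual bloom definition):

    typedef struct SignatureWords
    {
        int32       nwords;
        uint32      words[FLEXIBLE_ARRAY_MEMBER];   /* was: words[1] */
    } SignatureWords;

    static SignatureWords *
    make_signature(int nwords)
    {
        /* allocations must size the flexible member explicitly */
        SignatureWords *sw = (SignatureWords *)
            palloc(offsetof(SignatureWords, words) +
                   nwords * sizeof(uint32));

        sw->nwords = nwords;
        return sw;
    }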
Unfortunately, none of the Coverity-detected mistakes look like they
could account for the remaining buildfarm unhappiness with this
module. It's barely possible that the FLEXIBLE_ARRAY_MEMBER mistake
does account for that, if it's enabling bogus compiler optimizations;
but I'm not terribly optimistic. We probably still have bugs to
find here.
Looking at the results from buildfarm member jaguarundi, it seems that
BloomOptions sometimes isn't initialized, though I don't yet see how that's
possible. Nevertheless, a check on the signature length was missing, so add
a limit on it. Also add a missing GenericXLogAbort() for the case of an
already-deleted page in vacuum, plus minor code refactoring.
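The added abort path has roughly this shape (function name hypothetical,
details elided):

    static void
    vacuum_one_page(Relation index, Buffer buffer)
    {
        GenericXLogState *gxstate = GenericXLogStart(index);
        Page        page = GenericXLogRegisterBuffer(gxstate, buffer, 0);

        if (BloomPageIsDeleted(page))
            GenericXLogAbort(gxstate);      /* discard changes, no WAL */
        else
        {
            /* ... prune the page ... */
            GenericXLogFinish(gxstate);
        }
    }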
This module provides a new access method: a simple Bloom filter
implemented as a PostgreSQL index. It can give some benefit for searches
over a large number of columns.
The module is also currently the only way to test the generic WAL interface
committed earlier.
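Schematically, the module exposes the access method through a handler
function (heavily abbreviated; the real blhandler fills in many more
IndexAmRoutine fields):

    PG_FUNCTION_INFO_V1(blhandler);

    Datum
    blhandler(PG_FUNCTION_ARGS)
    {
        IndexAmRoutine *amroutine = makeNode(IndexAmRoutine);

        amroutine->ambuild = blbuild;
        amroutine->aminsert = blinsert;
        /* ... scan, vacuum, cost and option callbacks ... */

        PG_RETURN_POINTER(amroutine);
    }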
Author: Teodor Sigaev, Alexander Korotkov
Reviewers: Aleksander Alekseev, Michael Paquier, Jim Nasby
Commit fbe5a3fb73102c2cfec11aaaa4a67943f4474383 accidentally changed
this behavior; put things back the way they were, and add some
regression tests.
Report by Andres Freund; patch by Ashutosh Bapat, with a bit of
kibitzing by me.
brin_page_type() and brin_metapage_info() did not enforce being called
by superuser, like other pageinspect functions that take bytea do.
Since they don't verify the passed page thoroughly, it is possible to
use them to read the server memory with a carefully crafted bytea value,
up to a few kilobytes from where the input bytea is located.
Have them throw errors if called by a non-superuser.
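The added check is the usual pageinspect pattern:

    if (!superuser())
        ereport(ERROR,
                (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
                 errmsg("must be superuser to use raw page functions")));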
Report and initial patch: Andreas Seltenreich
Security: CVE-2016-3065
A join clause might mention multiple relations on either side, so it
need not be the case that a given joinrel's constituent relations are
all on one side of the join clause or all on the other.
Report by Rajkumar Raghuwanshi. Analysis and fix by Michael Paquier
and Ashutosh Bapat.
The two get_tle_by_resno() calls introduced by this commit lacked any
check for a NULL return, unlike any other calls of that function anywhere
in our tree. Coverity quite properly complained about it. Also fix a
misindented line in process_query_params(), which Coverity also complained
about on the grounds that the bad indentation suggested possible programmer
misinterpretation.
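The added checks amount to the standard defensive pattern (error message
illustrative):

    TargetEntry *tle = get_tle_by_resno(tlist, attnum);

    if (tle == NULL)            /* shouldn't happen, but don't crash */
        elog(ERROR, "no target entry found for resno %d", attnum);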
postgres_fdw can now send an UPDATE or DELETE statement directly to
the foreign server in simple cases, rather than sending a SELECT FOR
UPDATE statement and then updating or deleting rows one-by-one.
Etsuro Fujita, reviewed by Rushabh Lathia, Shigeru Hanada, Kyotaro
Horiguchi, Albe Laurenz, Thom Brown, and me.
The deprecated set_limit() function is modified to use SetConfigOption() to
set similarity_threshold, which is actually the variable behind the
pg_trgm.similarity_threshold GUC. The previous coding set
similarity_threshold directly, which could cause an inconsistency between
the state of the actual variable and its GUC representation.
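A simplified sketch of the new coding:

    Datum
    set_limit(PG_FUNCTION_ARGS)
    {
        float4      nlimit = PG_GETARG_FLOAT4(0);
        char        buf[32];

        snprintf(buf, sizeof(buf), "%f", nlimit);
        /* go through the GUC machinery instead of assigning directly */
        SetConfigOption("pg_trgm.similarity_threshold", buf,
                        PGC_USERSET, PGC_S_SESSION);
        PG_RETURN_FLOAT4(similarity_threshold);
    }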
Per gripe from Tom Lane
This patch introduces the concept of similarity between a whole string
and just a word of another string.
The extension version is not changed because 1.2 was already introduced
in the 9.6 release cycle, so there hasn't yet been a public release of it.
Author: Alexander Korotkov, Artur Zakirov
Use the GUC variable pg_trgm.similarity_threshold instead of
set_limit()/show_limit(), which were introduced back when modules had no
way to define GUC variables.
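Nowadays a module defines such a variable through the GUC facility,
roughly (from _PG_init(); values as in pg_trgm):

    DefineCustomRealVariable("pg_trgm.similarity_threshold",
                             "Sets the threshold used by the % operator.",
                             NULL,
                             &similarity_threshold,
                             0.3, 0.0, 1.0,
                             PGC_USERSET, 0,
                             NULL, NULL, NULL);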
Author: Artur Zakirov
There's no reason for this function to do this for every other
attribute number and omit it for CTID, especially since
conversion_error_callback has code to handle that case. This seems
to be an oversight in commit e690b9515072fd7767fdeca5c54166f6a77733bc.
Etsuro Fujita
Although the default choice of rel->reltarget should typically be
sufficient for scan or join paths, it's not at all sufficient for the
purposes PathTargets were invented for; in particular not for
upper-relation Paths. So break API compatibility by adding a PathTarget
argument to create_foreignscan_path(). To ease updating of existing
code, accept a NULL value of the argument as selecting rel->reltarget.
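So an existing call site can simply pass NULL, e.g.:

    path = create_foreignscan_path(root, baserel,
                                   NULL,    /* default: rel->reltarget */
                                   rows, startup_cost, total_cost,
                                   NIL,     /* no pathkeys */
                                   NULL,    /* no required_outer */
                                   NULL,    /* no fdw_outerpath */
                                   NIL);    /* no fdw_private */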
In commit 19a541143a09c067 I did not make PathTarget a subtype of Node,
and embedded a RelOptInfo's reltarget directly into it rather than having
a separately-allocated Node. In hindsight that was misguided
micro-optimization, enabled by the fact that at that point we didn't have
any Paths with custom PathTargets. Now that PathTarget processing has
been fleshed out some more, it's easier to see that it's better to have
PathTarget as an independent Node type, even if it does cost us one more
palloc to create a RelOptInfo. So change it while we still can.
This commit just changes the representation, without doing anything more
interesting than that.
This patch widens SPI_processed, EState's es_processed field, PortalData's
portalPos field, FuncCallContext's call_cntr and max_calls fields,
ExecutorRun's count argument, PortalRunFetch's result, and the max number
of rows in a SPITupleTable to uint64, and deals with (I hope) all the
ensuing fallout. Some of these values were declared uint32 before, and
others "long".
I also removed PortalData's posOverflow field, since that logic seems
pretty useless given that portalPos is now always 64 bits.
The user-visible results are that command tags for SELECT etc will
correctly report tuple counts larger than 4G, as will plpgsql's GET
DIAGNOSTICS ... ROW_COUNT command. Queries processing more tuples
than that are still not exactly the norm, but they're becoming more
common.
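For SPI callers, the visible fallout is just the counter type; a minimal
sketch:

    static uint64
    count_rows(void)
    {
        if (SPI_execute("SELECT * FROM big_tbl", true, 0) != SPI_OK_SELECT)
            elog(ERROR, "SPI_execute failed");
        /* SPI_processed is now uint64; it was a 32-bit count before */
        return SPI_processed;
    }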
Most values associated with FETCH/MOVE distances, such as PortalRun's count
argument and the count argument of most SPI functions that have one, remain
declared as "long". It's not clear whether it would be worth promoting
those to int64; but it would definitely be a large dollop of additional
API churn on top of this, and it would only help 32-bit platforms which
seem relatively less likely to see any benefit.
Andreas Scherbaum, reviewed by Christian Ullrich, additional hacking by me
New configuration parameter auto_explain.sample_ratio makes it
possible to log just a fraction of the queries meeting the configured
threshold, to reduce the amount of logging.
Author: Craig Ringer and Julien Rouhaud
Review: Petr Jelinek
In commit 1d97c19a0f748e94 and later c1d9579dd8bf3c92, we extended
pull_var_clause's API by adding enum-type arguments. That's sort of a pain
to maintain, though, because it means every time we add a new behavior we
must touch every last one of the call sites, even if there's a reasonable
default behavior that most of them could use. Let's switch over to using a
bitmask of flags, instead; that seems more maintainable and might save a
nanosecond or two as well. This commit changes no behavior in itself,
though I'm going to follow it up with one that does add a new behavior.
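After this commit a call site looks like, e.g.:

    List   *vars = pull_var_clause(node,
                                   PVC_RECURSE_AGGREGATES |
                                   PVC_RECURSE_PLACEHOLDERS);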
In passing, remove flatten_tlist(), which has not been used since 9.1
and would otherwise need the same API changes.
Removing these enums means that optimizer/tlist.h no longer needs to
depend on optimizer/var.h. Changing that caused a number of C files to
need addition of #include "optimizer/var.h" (probably we can thank old
runs of pgrminclude for that); but on balance it seems like a good change
anyway.
Renaming a file using rename(2) is not guaranteed to be durable in the face
of crashes. Use the previously added durable_rename()/durable_link_or_rename()
in various places where we previously just renamed files.
Most of the changed call sites are arguably not critical, but it seems
better to err on the side of too much durability. The most prominent
known case where the previously missing fsyncs could cause data loss is
crashes at the end of a checkpoint. After the actual checkpoint has been
performed, old WAL files are recycled. When they're filled, their
contents are fdatasynced, but we did not fsync the containing
directory. An OS/hardware crash at an unfortunate moment could then end
up leaving that file with its old name, but new content; WAL replay
would thus not replay it.
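The change at each call site is mechanical; a bare rename() becomes,
roughly:

    /* fsyncs both the new file and the containing directory */
    if (durable_rename(tmppath, path, LOG) != 0)
        return false;           /* failure already logged at LOG */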
Reported-By: Tomas Vondra
Author: Michael Paquier, Tomas Vondra, Andres Freund
Discussion: 56583BDD.9060302@2ndquadrant.com
Backpatch: All supported branches
pgcrypto already supports key-stretching during symmetric encryption,
including the salted-and-iterated method; but the number of iterations
was not configurable. This commit implements a new s2k-count parameter
to pgp_sym_encrypt() which permits selecting a larger number of
iterations.
Author: Jeff Janes
Commit ccd8f97922944566d26c7d90eb67ab7848ee9905 gave us the ability to
request that the remote side sort the data, and, later, commit
e4106b2528727c4b48639c0e12bf2f70a766b910 gave us the ability to
request that the remote side perform the join for us rather than doing
it locally. But we could not do both things at the same time: a
remote SQL query that had an ORDER BY clause would never be a join.
This commit adds that capability.
Ashutosh Bapat, reviewed by me.
The headers of ltree/ltree_gist/ltxtquery store data at MAXALIGN alignment,
requiring some padding bytes. So far we left these uninitialized. Zero
them by using palloc0.
Author: Andres Freund
Reported-By: Andres Freund / valgrind / buildfarm animal skink
Backpatch: 9.1-
This lets you examine the visibility map as well as page-level
visibility information. I initially wrote it as a debugging aid,
but was encouraged to polish it for commit.
Patch by me, reviewed by Masahiko Sawada.
Discussion: 56D77803.6080503@BlueTreble.com
When decoding the old version of an UPDATE or DELETE change, and if that
tuple was bigger than MaxHeapTupleSize, we either Assert'ed out, or
failed in more subtle ways in non-assert builds. Normally individual
tuples aren't bigger than MaxHeapTupleSize, with big datums toasted.
But that's not the case for the old version of a tuple for logical
decoding; the replica identity is logged as one piece. With the default
replica identity, btree's size limits keep that to small tuples, but that's
not the case for FULL.
Change the tuple buffer infrastructure to separately allocate over-large
tuples, instead of always going through the slab cache.
This unfortunately requires changing the ReorderBufferTupleBuf
definition, since we need to store the allocated size someplace. To avoid
requiring output plugins to recompile, don't store HeapTupleHeaderData
directly after HeapTupleData, but point to it via t_data; that leaves
room for the allocated size. As there's no reason for an output plugin
to look at ReorderBufferTupleBuf->t_data.header, remove the field. It
was just a minor convenience having it directly accessible.
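The reworked struct looks roughly like this (abbreviated):

    typedef struct ReorderBufferTupleBuf
    {
        /* position in preallocated list */
        slist_node      node;

        /* tuple; tuple.t_data points past the struct, not into it */
        HeapTupleData   tuple;

        /* pre-allocated size of the tuple buffer */
        Size            alloc_tuple_size;

        /* actual tuple data follows, MAXALIGNed */
    } ReorderBufferTupleBuf;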
Reported-By: Adam Dratwiński
Discussion: CAKg6ypLd7773AOX4DiOGRwQk1TVOQKhNwjYiVjJnpq8Wo+i62Q@mail.gmail.com
Somehow I managed to flip the order of restoring old & new tuples when
de-spooling a change in a large transaction from disk. This happens to
only take effect when a change is spooled to disk which has old/new
versions of the tuple. That is only the case for UPDATEs where the
primary key changed or where the replica identity is changed to FULL.
The tests didn't catch this because either spooled updates, or updates
that changed primary keys, were tested; not both at the same time.
Found while adding tests for the following commit.
Backpatch: 9.4, where logical decoding was added
Logical decoding's reorderbuffer keeps transactions in an LSN-ordered
list for efficiency. To make that efficiently possible, upper-level
xids are forced to be logged before nested subtransaction xids. That
only works, though, if these records are all looked at: unfortunately we
didn't do so for e.g. row-level locks, which are otherwise uninteresting
for logical decoding.
This could lead to errors like:
"ERROR: subxact logged without previous toplevel record".
It's not sufficient to just look at row-locking records; the xid could
appear first in a lot of other types of records (which will trigger the
transaction to be marked logged with MarkCurrentTransactionIdLoggedIfAny).
So invent infrastructure to tell reorderbuffer about xids seen, when
they'd otherwise not pass through reorderbuffer.c.
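Concretely, decode routines that otherwise ignore a record now at least
report its xid, along these lines:

    ReorderBufferProcessXid(ctx->reorder, XLogRecGetXid(record),
                            buf->origptr);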
Reported-By: Jarred Ward
Bug: #13844
Discussion: 20160105033249.1087.66040@wrigleys.postgresql.org
Backpatch: 9.4, where logical decoding was added
Previously, we included NULLS FIRST when appropriate but relied on the
default behavior to be NULLS LAST. This is, however, not true for a
sort in descending order and seems like a fragile assumption anyway.
Report by Rajkumar Raghuwanshi. Patch by Ashutosh Bapat. Review
comments from Michael Paquier and Tom Lane.
Otherwise running installcheck-force on a server with
synchronous_commit=off will result in the tests failing. All the other
tests already force synchronous_commit=on...
Backpatch: 9.4, where logical decoding was added
When adding replication origins in 5aa235042, I somehow managed to set
the timestamp of decoded transactions to InvalidXLogRecPtr when decoding
one made without a replication origin. Fix that, and the wrong type of
the new commit_time variable.
This didn't trigger a regression test failure because we explicitly
don't show commit timestamps in the regression tests, as they obviously
are variable. Add a test that checks that a decoded commit's timestamp
is within minutes of NOW() from before the commit.
Reported-By: Weiping Qu
Diagnosed-By: Artur Zakirov
Discussion: 56D4197E.9050706@informatik.uni-kl.de,
56D42918.1010108@postgrespro.ru
Backpatch: 9.5, where 5aa235042 originates.
The new bit indicates whether every tuple on the page is already frozen.
It is cleared only when the all-visible bit is cleared, and it can be
set only when we vacuum a page and find that every tuple on that page is
both visible to every transaction and in no need of any future
vacuuming.
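The visibility map thus now carries two bits per heap page:

    #define VISIBILITYMAP_ALL_VISIBLE   0x01
    #define VISIBILITYMAP_ALL_FROZEN    0x02    /* new in this commit */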
A future commit will use this new bit to optimize away full-table scans
that would otherwise be triggered by XID wraparound considerations. A
page which is merely all-visible must still be scanned in that case, but
a page which is all-frozen need not be. This commit does not attempt
that optimization, although that optimization is the goal here. It
seems better to get the basic infrastructure in place first.
Per discussion, it's very desirable for pg_upgrade to automatically
migrate existing VM forks from the old format to the new format. That,
too, will be handled in a follow-on patch.
Masahiko Sawada, reviewed by Kyotaro Horiguchi, Fujii Masao, Amit
Kapila, Simon Riggs, Andres Freund, and others, and substantially
revised by me.
This is basically a bug fix; the old code assumes that a ForeignScan
is always parallel-safe, but for postgres_fdw, for example, this is
definitely false. It should be true for file_fdw, though, since a
worker can read a file from the filesystem just as well as any other
backend process.
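An FDW declares this explicitly through a callback; for file_fdw it is
essentially:

    static bool
    fileIsForeignScanParallelSafe(PlannerInfo *root, RelOptInfo *rel,
                                  RangeTblEntry *rte)
    {
        return true;    /* reading a file works fine in a worker */
    }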
Original patch by Thomas Munro. Documentation, and changes to the
comments, by me.