pg_upgrade previously copied pg_largeobject_metadata over from the old
cluster. That doesn't work, because the table had oids before
578b229718. I missed that.
As most pieces of metadata for large objects were already dumped as
DDL (except for comments, which were overwritten by pg_upgrade due to
the copy of pg_largeobject_metadata), it seems reasonable to just also
dump grants for large objects. If we ever consider this a relevant
performance problem, we'd need to fix the rest of the already emitted
DDL too.
There's still an open discussion about whether we'll want to force a
specific ordering for the dumped objects, as currently
pg_largeobject_metadata potentially has a different ordering
before/after pg_upgrade, which can make automated testing a bit
harder.
Reported-By: Andrew Dunstan
Author: Andres Freund
Discussion: https://postgr.es/m/91a8a980-41bc-412b-fba2-2ba71a141c2b@2ndQuadrant.com
latex_escaped_print() mistranslated \ and failed to provide any translation
for # ^ and ~, all of which would typically lead to LaTeX document syntax
errors. In addition it didn't translate < > and |, which would typically
render as unexpected characters.
To some extent this represents shortcomings in ancient versions of LaTeX,
which if memory serves had no easy way to render these control characters
as ASCII text. But that's been fixed for, um, decades. In any case there
is no value in emitting guaranteed-to-fail output for these characters.
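For reference, the kind of translation that avoids LaTeX syntax errors
looks roughly like this (illustrative only, not the committed print.c
code):

    #include <stdio.h>

    static void
    latex_escape_char(char c, FILE *fout)
    {
        switch (c)
        {
            case '\\': fputs("\\textbackslash{}", fout); break;
            case '#':  fputs("\\#", fout); break;
            case '^':  fputs("\\^{}", fout); break;
            case '~':  fputs("\\~{}", fout); break;
            case '<':  fputs("\\textless{}", fout); break;
            case '>':  fputs("\\textgreater{}", fout); break;
            case '|':  fputs("\\textbar{}", fout); break;
            default:   fputc(c, fout);
        }
    }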
Noted while fooling with test cases added by commit 9a98984f4. Back-patch
the code change to all supported versions.
I'd forgotten that in the buildfarm, parts of the regression tests
may run with psql exposed to a non-default LC_NUMERIC setting.
Hence we can't assume that C locale prevails, nor is there any
accessible way to force the setting for this single test step.
Lobotomize the test case added by commit 9a98984f4 so that it covers as
much as we can of print.c without having any locale-varying output.
"\pset format csv", or --csv, selects comma-separated values table format.
This is compliant with RFC 4180, except that we aren't too picky about
whether the record separator is LF or CRLF; also, the user may choose a
field separator other than comma.
This output format is directly compatible with the server's COPY CSV
format, and will also be useful as input to other programs. It's
considerably safer for that purpose than the old recommendation to
use "unaligned" format, since the latter couldn't handle data
containing the field separator character.
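The quoting rules amount to something like this (a sketch assuming
plain RFC 4180 semantics, not the psql implementation itself): a field
is quoted only if it contains the separator, a double quote, CR, or
LF, and embedded double quotes are doubled.

    #include <stdio.h>
    #include <string.h>

    static void
    csv_print_field(const char *s, char sep, FILE *fout)
    {
        if (strchr(s, sep) || strpbrk(s, "\"\r\n"))
        {
            fputc('"', fout);
            for (; *s; s++)
            {
                if (*s == '"')
                    fputc('"', fout);   /* double embedded quotes */
                fputc(*s, fout);
            }
            fputc('"', fout);
        }
        else
            fputs(s, fout);
    }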
Daniel Vérité, reviewed by Fabien Coelho and David Fetter, some
tweaking by me
Discussion: https://postgr.es/m/a8de371e-006f-4f92-ab72-2bbe3ee78f03@manitou-mail.org
As penance for the "\pset format latex" silliness, add some regression
test coverage for the off-the-beaten-path output formats, which formerly
had exactly no coverage, except for some poorly-thought-out (unreadable,
repetitive, and incomplete) tests for asciidoc format.
I make no claims for the behavior exposed here actually being correct;
these test cases are just designed to ensure full code coverage in
fe_utils/print.c. This brings the line coverage for that file up
from ~60% to ~93%.
Commit eaf746a5b unintentionally made psql's "latex" output format
inaccessible, since not only "latex" but all abbreviations of it
were considered ambiguous against "latex-longtable". Let's go
back to the longstanding behavior that all shortened versions
mean "latex", and you have to write at least "latex-" to get
"latex-longtable". This leaves the only difference from pre-v12
behavior being that "\pset format a" is considered ambiguous.
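The resolution rule amounts to something like this (a sketch, not the
actual do_pset() code):

    #include <string.h>

    static const char *
    resolve_latex_format(const char *value)
    {
        size_t  len = strlen(value);

        /* any prefix of "latex" (including "latex" itself) means latex */
        if (len <= 5 && strncmp(value, "latex", len) == 0)
            return "latex";
        /* at least "latex-" is needed to select latex-longtable */
        if (len >= 6 && strncmp(value, "latex-longtable", len) == 0)
            return "latex-longtable";
        return NULL;    /* unrecognized */
    }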
The fact that the regression tests didn't expose this is pretty bad,
but fixing it is material for a separate commit.
Discussion: https://postgr.es/m/cb7e1caf-3ea6-450d-af28-f524903a030c@manitou-mail.org
A set of failures on buildfarm machines is proving that this is not
quite ready yet because of another set of issues:
- MSVC scripts assume that REGRESS_OPTS can only use top_builddir. Some
test suites actually finish by using top_srcdir, like pg_stat_statements,
which causes the regression tests to never run.
- Trying to enforce top_builddir does not work either when using VPATH
as this is not recognized properly.
- TAP tests of bloom are unstable on various platforms, causing various
failures.
The following options are added for extensions (an example Makefile
sketch follows this list):
- TAP_TESTS, to allow an extension to run TAP tests, which are the ones
present in t/*.pl. A subset of tests can always be run with the
existing PROVE_TESTS for developers.
- ISOLATION, to define a list of isolation tests.
- ISOLATION_OPTS, to pass custom options to isolation_tester.
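For illustration, a hypothetical extension Makefile using the new
flags could look like this (the spec and config names are made up):

    TAP_TESTS = 1
    ISOLATION = my-isolation-spec
    ISOLATION_OPTS = --temp-config ./my_extension.conf

    PG_CONFIG = pg_config
    PGXS := $(shell $(PG_CONFIG) --pgxs)
    include $(PGXS)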
A couple of custom Makefile targets had accumulated across the tree
over the past couple of releases to cover the lack of such facilities
in PGXS when using those test suites; these are all now replaced with
the new flags, without reducing the test coverage. This also fixes an
issue with contrib/bloom/, which had a custom target of its own to
trigger its TAP tests, not run as part of the main check runs.
Author: Michael Paquier
Reviewed-by: Adam Berlin, Álvaro Herrera, Tom Lane, Nikolay Shaplov,
Arthur Zakirov
Discussion: https://postgr.es/m/20180906014849.GG2726@paquier.xyz
recovery.conf settings are now set in postgresql.conf (or other GUC
sources). Currently, all the affected settings are PGC_POSTMASTER;
this could be refined in the future case by case.
Recovery is now initiated by a file recovery.signal. Standby mode is
initiated by a file standby.signal. The standby_mode setting is
gone. If a recovery.conf file is found, an error is issued.
The trigger_file setting has been renamed to promote_trigger_file as
part of the move.
The documentation chapter "Recovery Configuration" has been integrated
into "Server Configuration".
pg_basebackup -R now appends settings to postgresql.auto.conf and
creates a standby.signal file.
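For illustration, a minimal streaming standby under the new scheme
might be configured like this (hostnames and paths are placeholders):

    # postgresql.conf -- settings that previously lived in recovery.conf
    primary_conninfo = 'host=primary.example.com user=replicator'
    restore_command = 'cp /archive/%f %p'
    promote_trigger_file = '/tmp/promote'   # formerly trigger_file

and, instead of standby_mode = 'on':

    touch $PGDATA/standby.signal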
Author: Fujii Masao <masao.fujii@gmail.com>
Author: Simon Riggs <simon@2ndquadrant.com>
Author: Abhijit Menon-Sen <ams@2ndquadrant.com>
Author: Sergei Kornilov <sk@zsrv.org>
Discussion: https://www.postgresql.org/message-id/flat/607741529606767@web3g.yandex.ru/
Commit cfdf4dc4 added an assertion that every WaitLatch() or similar
handles postmaster death. One place did not, but was missed in
review and testing due to the need for an SSL connection. Fix, by
asking for WL_EXIT_ON_PM_DEATH.
Reported-by: Christoph Berg
Discussion: https://postgr.es/m/20181124143845.GA15039%40msg.df7cb.de
Early returns from the buildfarm say that most critters are good with
commit cbdb8b4c0, but gaur gives unexpected results with the test case
involving a float8 that's one-ULP-less-than-2^63. It appears that that
platform's version of rint() rounds that value up to 2^63 instead of
leaving it be. This is possibly a bug, and it's also possible that no
other platform anybody is using anywhere behaves likewise. Still, the
point of the test is not to insist that everybody's rint() behaves exactly
the same. Let's use two-ULPs-less-than-2^63 instead, which I've tested
to act the same on gaur as on more modern hardware.
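(For reference, assuming IEEE 754 double precision: values just below
2^63 are spaced 2^10 = 1024 apart, so one-ULP-less-than-2^63 is
9223372036854774784, and two-ULPs-less is 9223372036854773760.)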
(This is, more or less, exactly the portability issue I'd feared might
arise...)
Discussion: https://postgr.es/m/15519-4fc785b483201ff1@postgresql.org
ftoi4 and its sibling coercion functions did their overflow checks in
a way that looked superficially plausible, but actually depended on an
assumption that the MIN and MAX comparison constants can be represented
exactly in the float4 or float8 domain. That fails in ftoi4, ftoi8,
and dtoi8, resulting in a possibility that values near the MAX limit will
be wrongly converted (to negative values) when they need to be rejected.
Also, because we compared before rounding off the fractional part,
the other three functions threw errors for values that really ought
to get rounded to the min or max integer value.
Fix by doing rint() first (requiring an assumption that it handles
NaN and Inf correctly; but dtoi8 and ftoi8 were assuming that already),
and by comparing to values that should coerce to float exactly, namely
INTxx_MIN and -INTxx_MIN. Also remove some random cosmetic discrepancies
between these six functions.
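The corrected pattern looks roughly like this for dtoi4 (a sketch; the
committed code raises an out-of-range error where this returns a dummy
value):

    #include <math.h>
    #include <stdint.h>

    static int32_t
    dtoi4_sketch(double num)
    {
        num = rint(num);    /* round first; assumes rint() passes NaN/Inf through */

        /*
         * INT32_MIN and -INT32_MIN are powers of two, hence exactly
         * representable as doubles.  Note the strict "<" on the upper
         * bound: -INT32_MIN itself is one past the largest int32.
         */
        if (isnan(num) ||
            num < (double) INT32_MIN ||
            num >= -((double) INT32_MIN))
            return 0;       /* real code: ereport(ERROR, ...) */
        return (int32_t) num;
    }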
Per bug #15519 from Victor Petrovykh. This should get back-patched,
but first let's see what the buildfarm thinks of it --- I'm not too
sure about portability of some of the regression test cases.
Patch by me; thanks to Andrew Gierth for analysis and discussion.
Discussion: https://postgr.es/m/15519-4fc785b483201ff1@postgresql.org
We should never estimate the output of a semijoin to be more rows than
we estimate for an inner join with the same input rels and join condition;
it's obviously impossible for that to happen. However, given the
relatively poor quality of our semijoin selectivity estimates ---
particularly, but not only, in cases where we punt and return a default
estimate --- we did often deliver such estimates. To improve matters,
calculate both estimates inside eqjoinsel() and take the smaller one.
The bulk of this patch is just mechanical refactoring to avoid repetitive
information lookup when we call both eqjoinsel_semi and eqjoinsel_inner.
The actual new behavior is just
selec = Min(selec, inner_rel->rows * selec_inner);
which looks a bit odd but is correct because of our different
definitions for inner and semi join selectivity: semijoin selectivity
is taken relative to the outer relation's row count alone, while inner
join selectivity is relative to the full outer-times-inner cross
product, so multiplying by inner_rel->rows puts the two on the same
scale.
There is one ensuing plan change in the regression tests, but it looks
reasonable enough (and checking the actual row counts shows that the
estimate moved closer to reality, not further away).
Per bug #15160 from Alexey Ermakov. Although this is arguably a bug fix,
I won't risk destabilizing plan choices in stable branches by
back-patching.
Tom Lane, reviewed by Melanie Plageman
Discussion: https://postgr.es/m/152395805004.19366.3107109716821067806@wrigleys.postgresql.org
Users of the WaitEventSet and WaitLatch() APIs can now choose between
asking for WL_POSTMASTER_DEATH and then handling it explicitly, or asking
for WL_EXIT_ON_PM_DEATH to trigger immediate exit on postmaster death.
This reduces code duplication, since almost all callers want the latter.
Repair all code that was previously ignoring postmaster death completely,
or requesting the event but ignoring it, or requesting the event but then
doing an unconditional PostmasterIsAlive() call every time through its
event loop (which is an expensive syscall on platforms for which we don't
have USE_POSTMASTER_DEATH_SIGNAL support).
Assert that callers of WaitLatchXXX() under the postmaster remember to
ask for either WL_POSTMASTER_DEATH or WL_EXIT_ON_PM_DEATH, to prevent
future bugs.
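To illustrate the pattern change (a sketch, not code from any
particular caller; PG_WAIT_EXTENSION stands in for the caller's real
wait_event_info):

    /* Before: each caller had to notice postmaster death itself. */
    rc = WaitLatch(MyLatch,
                   WL_LATCH_SET | WL_TIMEOUT | WL_POSTMASTER_DEATH,
                   timeout, PG_WAIT_EXTENSION);
    if (rc & WL_POSTMASTER_DEATH)
        proc_exit(1);

    /* After: most callers can simply request an immediate exit. */
    rc = WaitLatch(MyLatch,
                   WL_LATCH_SET | WL_TIMEOUT | WL_EXIT_ON_PM_DEATH,
                   timeout, PG_WAIT_EXTENSION);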
The only process that doesn't handle postmaster death is syslogger. It
waits until all backends holding the write end of the syslog pipe
(including the postmaster) have closed it by exiting, to be sure to
capture any parting messages. By using the WaitEventSet API directly
it avoids the new assertion, and as a by-product it may be slightly
more efficient on platforms that have epoll().
Author: Thomas Munro
Reviewed-by: Kyotaro Horiguchi, Heikki Linnakangas, Tom Lane
Discussion: https://postgr.es/m/CAEepm%3D1TCviRykkUb69ppWLr_V697rzd1j3eZsRMmbXvETfqbQ%40mail.gmail.com,
https://postgr.es/m/CAEepm=2LqHzizbe7muD7-2yHUbTOoF7Q+qkSD5Q41kuhttRTwA@mail.gmail.com
populate_recordset_worker() failed to consider the possibility that the
supplied JSON data contains no rows, so that update_cached_tupdesc never
got called. This led to a null-pointer dereference since commit 9a5e8ed28;
before that it led to a bogus "set-valued function called in context that
cannot accept a set" error. Fix by forcing the update to happen.
Per bug #15514. Back-patch to v11 as 9a5e8ed28 was. (If we were excited
about the bogus error, we could perhaps go back further, but it'd take more
work to figure out how to fix it in older branches. Given the lack of
field complaints about that aspect, I'm not excited.)
Discussion: https://postgr.es/m/15514-59d5b4c4065b178b@postgresql.org
This commit continues the code improvements started by commit
12788ae49e19. With this commit, state machine transitions are better
contained in the routine that was called doCustom() and is now called
advanceConnectionState -- the resulting code is easier to reason about,
since there are no state changes occurring in the outer layer.
This change is prompted by future patches to add more features to
pgbench, which will need to effect some more surgery to this code.
Fabien's original had all the machine state changes inside one routine,
but I (Álvaro) thought that a subroutine to handle command execution is
more straightforward to review, so this commit does not match Fabien's
submission closely. If something is broken, it's probably my fault.
Author: Fabien Coelho, Álvaro Herrera
Reviewed-by: Kirk Jamison
Discussion: https://postgr.es/m/alpine.DEB.2.21.1808111104320.1705@lancre
Previously tables declared WITH OIDS, including a significant fraction
of the catalog tables, stored the oid column not as a normal column,
but as part of the tuple header.
This special column was not shown by default, which was somewhat odd,
as it's often (consider e.g. pg_class.oid) one of the more important
parts of a row. Neither pg_dump nor COPY included the contents of the
oid column by default.
The fact that the oid column was not an ordinary column necessitated a
significant amount of special case code to support oid columns. That
already was painful for the existing code, but the upcoming work
aiming to make table storage pluggable would have required expanding
and duplicating that "specialness" significantly.
WITH OIDS has been deprecated since 2005 (commit ff02d0a05280e0).
Remove it.
The removal includes:
- CREATE TABLE and ALTER TABLE syntax for declaring the table to be
WITH OIDS has been removed (WITH (oids[ = true]) will error out)
- pg_dump does not support dumping tables declared WITH OIDS and will
issue a warning when dumping one (and ignore the oid column).
- restoring a pg_dump archive with pg_restore will warn when
restoring a table with oid contents (and ignore the oid column)
- COPY will refuse to load a binary dump that includes oids.
- pg_upgrade will error out when encountering tables declared WITH
OIDS; they have to be altered to remove the oid column first.
- Functionality to access the oid of the last inserted row (like
plpgsql's RESULT_OID, spi's SPI_lastoid, ...) has been removed.
The syntax for declaring a table WITHOUT OIDS (or WITH (oids = false)
for CREATE TABLE) is still supported. While that requires a bit of
support code, it seems unnecessary to break applications / dumps that
do not use oids, and are explicit about not using them.
The biggest user of WITH OID columns was postgres' catalog. This
commit changes all 'magic' oid columns to be columns that are normally
declared and stored. To reduce unnecessary query breakage all the
newly added columns are still named 'oid', even if a table's column
naming scheme would indicate 'reloid' or such. This obviously
requires adapting a lot of code, mostly replacing oid access via
HeapTupleGetOid() with access to the underlying Form_pg_*->oid column.
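For example, code that used to pull a catalog row's oid out of the
tuple header now reads it from the row data itself (a sketch, assuming
a pg_class lookup; error checking and ReleaseSysCache() omitted):

    HeapTuple   tup = SearchSysCache1(RELOID, ObjectIdGetDatum(relid));
    Form_pg_class classForm = (Form_pg_class) GETSTRUCT(tup);

    /* previously: Oid reloid = HeapTupleGetOid(tup); */
    Oid         reloid = classForm->oid;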
The bootstrap process now assigns oids for all oid columns in
genbki.pl that do not have an explicit value (starting at the largest
oid previously used); only oids assigned later, during initdb's
post-bootstrap phase, will be above FirstBootstrapObjectId. As the oid
column now is a normal column, the special bootstrap syntax for oids
has been removed.
Oids are not automatically assigned during insertion anymore; all
backend code explicitly assigns oids with GetNewOidWithIndex(). For
the rare case that insertions into the catalog via SQL are called for,
the new pg_nextoid() function can be used (which only works on catalog
tables).
The fact that oid columns on system tables are now normal columns
means that they will be included in the set of columns expanded
by * (i.e. SELECT * FROM pg_class will now include the table's oid,
previously it did not). It'd not technically be hard to hide the oid
column by default, but that'd mean confusing behavior would either
have to be carried forward forever, or it'd cause breakage down the
line.
While it's not unlikely that further adjustments will be needed, the
scope/invasiveness of the patch makes it worthwhile to merge this
now. It's painful to maintain externally, too complicated to commit
after the code freeze, and a dependency of a number of other
patches.
Catversion bump, for obvious reasons.
Author: Andres Freund, with contributions by John Naylor
Discussion: https://postgr.es/m/20180930034810.ywp2c7awz7opzcfr@alap3.anarazel.de
The dedicated private buffer for storing records has been used, since
285bd0ac, only for those crossing a page boundary, but its description
did not completely match reality.
Reported-by: Andrey Lepikhov
Author: Michael Paquier
Discussion: https://postgr.es/m/49518b48-2036-5e43-1818-0f594e375e76@postgrespro.ru
As already explained in configure.in, using the OpenSSL version number
to detect presence of functions doesn't work, because LibreSSL reports
incompatible version numbers. Fortunately, the functions we need here
are actually macros, so we can just test for them directly.
Two issues have been spotted and get fixed here:
- When checking for corrupted files, make sure that pg_verify_checksums
complains about the correct file. In order to make the logic more
robust, all files created are immediately removed once checks on them
are done. The error message generated by pg_verify_checksums also now
includes the file name it sees as corrupted.
- Before running corruption-related tests, empty files were generated
whose names overlapped with those of the corrupted files, potentially
leading to conflicts. So use a different set of names for each.
Author: Michael Banck
Discussion: https://postgr.es/m/20181119181119.GC23740@nighthawk.caipicrew.dd-dns.de
Per POSIX, WIFSIGNALED and related macros are provided by <sys/wait.h>.
Apparently on Linux they're also pulled in by some other inclusions,
but BSD-ish systems are pickier. Fixes portability issue in ffa4cbd62.
Per buildfarm.
Previously, any program launched by COPY TO/FROM PROGRAM inherited the
server's setting of SIGPIPE handling, i.e. SIG_IGN. Hence, if we were
doing COPY FROM PROGRAM and closed the pipe early, the child process
would see EPIPE on its output file and typically would treat that as
a fatal error, in turn causing the COPY to report an error. Similarly,
one could get a failure report from a query that didn't read all of
the output from a contrib/file_fdw foreign table that uses file_fdw's
PROGRAM option.
To fix, ensure that child programs inherit SIG_DFL not SIG_IGN processing
of SIGPIPE. This seems like an all-around better situation since if
the called program wants some non-default treatment of SIGPIPE, it would
expect to have to set that up for itself. Then in COPY, if it's COPY
FROM PROGRAM and we stop reading short of detecting EOF, treat a SIGPIPE
exit from the called program as a non-error condition. This still allows
us to report an error for any case where the called program gets SIGPIPE
on some other file descriptor.
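The two halves of the fix amount to roughly the following (a sketch;
stopped_early is a hypothetical flag meaning we closed the pipe before
reaching EOF):

    /* In the child, between fork() and exec'ing the called program: */
    pqsignal(SIGPIPE, SIG_DFL);     /* undo the server's SIG_IGN */

    /* In COPY FROM PROGRAM, when reaping the child's exit status: */
    if (WIFSIGNALED(exitstatus) &&
        WTERMSIG(exitstatus) == SIGPIPE &&
        stopped_early)
        exitstatus = 0;             /* expected; not an error */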
As coded, we won't report a SIGPIPE if we stop reading as a result of
seeing an in-band EOF marker (e.g. COPY BINARY EOF marker). It's
somewhat debatable whether we should complain if the called program
continues to transmit data after an EOF marker. However, it seems like
we should avoid throwing error in any questionable cases, especially in a
back-patched fix, and anyway it would take additional code to make such
an error get reported consistently.
Back-patch to v10. We could go further back, since COPY FROM PROGRAM
has been around awhile, but AFAICS the only way to reach this situation
using core or contrib is via file_fdw, which has only supported PROGRAM
sources since v10. The COPY statement per se has no feature whereby
it'd stop reading without having hit EOF or an error already. Therefore,
I don't see any upside to back-patching further that'd outweigh the
risk of complaints about behavioral change.
Per bug #15449 from Eric Cyr.
Patch by me, review by Etsuro Fujita and Kyotaro Horiguchi
Discussion: https://postgr.es/m/15449-1cf737dd5929450e@postgresql.org
In \d and \z, instead of conflating partitioned tables and indexes with
plain ones, set the "type" column and table title differently to make
the distinction obvious. A simple ease-of-use improvement.
Author: Pavel Stehule, Michaël Paquier, Álvaro Herrera
Reviewed-by: Amit Langote
Discussion: https://postgr.es/m/CAFj8pRDMWPgijpt_vPj1t702PgLG4Ls2NCf+rEcb+qGPpossmg@mail.gmail.com
When hostaddr is given, the actual IP address that psql is connected to
can be totally unexpected for the given host. The more verbose output
we now generate makes things clearer. Since the "host" and "hostaddr"
parts of the conninfo could come from different sources (say, one of
them is in the service specification or a URI-style conninfo and the
other is not), this is not as silly as it may first appear. This is
also definitely useful if the hostname resolves to multiple addresses.
Author: Fabien Coelho
Reviewed-by: Pavel Stehule, Arthur Zakirov
Discussion: https://postgr.es/m/alpine.DEB.2.21.1810261532380.27686@lancre
https://postgr.es/m/alpine.DEB.2.21.1808201323020.13832@lancre
The 'partoids' list which was constructed by the previous version
of this code was necessarily identical to 'inhoids'. There's no
point to duplicating the list, so avoid that. Instead, construct
the array representation directly from the original 'inhoids' list.
Also, use an array rather than a list for 'boundspecs'. We know
exactly how many items we need to store, so there's really no
reason to use a list. Using an array instead reduces the number
of memory allocations we perform.
Patch by me, reviewed by Michael Paquier and Amit Langote, the
latter of whom also helped with rebasing.
This didn't actually work: COPY would fail to flush the right files, and
instead would try to flush a non-existing file, causing the whole
transaction to fail.
Cope by raising an error as soon as the command is sent instead, to
avoid a nasty later surprise. Of course, it would be much better to
make it work, but we don't have a patch for that yet, and we don't know
if we'll want to backpatch one when we do.
Reported-by: Tomas Vondra
Author: David Rowley
Reviewed-by: Amit Langote, Steve Singer, Tomas Vondra
On some operating systems, it doesn't make sense to retry fsync(),
because dirty data cached by the kernel may have been dropped on
write-back failure. In that case the only remaining copy of the
data is in the WAL. A subsequent fsync() could appear to succeed,
but not have flushed the data. That means that a future checkpoint
could apparently complete successfully but have lost data.
Therefore, violently prevent any future checkpoint attempts by
panicking on the first fsync() failure. Note that we already
did the same for WAL data; this change extends that behavior to
non-temporary data files.
Provide a GUC data_sync_retry to control this new behavior, for
users of operating systems that don't eject dirty data, and possibly
forensic/testing uses. If it is set to on and the write-back error
was transient, a later checkpoint might genuinely succeed (on a
system that does not throw away buffers on failure); if the error is
permanent, later checkpoints will continue to fail. The GUC defaults
to off, meaning that we panic.
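The resulting behavior at each fsync of a permanent data file amounts
to roughly this (a sketch; the committed code routes the decision
through a small helper rather than testing the GUC inline):

    if (pg_fsync(fd) != 0)
        ereport(data_sync_retry ? ERROR : PANIC,
                (errcode_for_file_access(),
                 errmsg("could not fsync file \"%s\": %m", path)));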
Back-patch to all supported releases.
There is still a narrow window for error-loss on some operating
systems: if the file is closed and later reopened and a write-back
error occurs in the intervening time, but the inode has the bad
luck to be evicted due to memory pressure before we reopen, we could
miss the error. A later patch will address that with a scheme
for keeping files with dirty data open at all times, but we judge
that to be too complicated to back-patch.
Author: Craig Ringer, with some adjustments by Thomas Munro
Reported-by: Craig Ringer
Reviewed-by: Robert Haas, Thomas Munro, Andres Freund
Discussion: https://postgr.es/m/20180427222842.in2e4mibx45zdth5%40alap3.anarazel.de
If fsync() fails, md.c must keep the request in its bitmap, so that
future attempts will try again.
Back-patch to all supported releases.
Author: Thomas Munro
Reviewed-by: Amit Kapila
Reported-by: Andrew Gierth
Discussion: https://postgr.es/m/87y3i1ia4w.fsf%40news-spur.riddles.org.uk
When reading a WAL record, its contents are copied into an intermediate
buffer. However, doing so is not necessary if the record fits fully
into the current page, saving one memcpy for each such record. The
allocation handling of the intermediate buffer is also now done only
when a record crosses a page boundary, shaving some extra cycles when
reading a WAL record.
Author: Andrey Lepikhov
Reviewed-by: Kyotaro Horiguchi, Heikki Linnakangas
Discussion: https://postgr.es/m/c2ea54dd-a1d3-80eb-ddbf-7e6f258e615e@postgrespro.ru
wcsrtombs (called through wchar2char from common functions like lower,
upper, etc.) uses various optimizations that may look like access to
uninitialized data, triggering valgrind reports.
For example AVX2 instructions load data in 256-bit chunks, and gconv
does something similar with 32-bit chunks. This is faster than accessing
the bytes one by one, and the uninitialized part of the buffer is not
actually used. So suppress the bogus reports.
The exact stack depends on possible optimizations - it might be AVX, SSE
(as in the report by Aleksander Alekseev) or something else. Hence the
last frame is wildcarded, to deal with this.
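The added suppressions look roughly like this (the name here is made
up, and the wildcarded innermost frames are where the AVX/SSE/gconv
variants differ):

    {
       wcsrtombs_uninitialized_read
       Memcheck:Cond
       ...
       fun:wcsrtombs
       fun:wchar2char
    }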
Backpatch all the way back to 9.4.
Author: Tomas Vondra
Discussion: https://www.postgresql.org/message-id/flat/90ac0452-e907-e7a4-b3c8-15bd33780e62%402ndquadrant.com
Discussion: https://www.postgresql.org/message-id/20180220150838.GD18315@e733.localdomain
Setting them to SIG_IGN seems unlikely to have any beneficial effect
on that platform, and given the signal numbering collision with SIGABRT,
it could easily have bad effects.
Given the lack of field complaints that can be traced to this, I don't
presently feel a need to back-patch.
Discussion: https://postgr.es/m/5627.1542477392@sss.pgh.pa.us
For reasons lost in the mists of time, most postmaster child processes
reset SIGTTIN/SIGTTOU signal handling to SIG_DFL, with the major exception
that backend sessions do not. It seems like a pretty bad idea for any
postmaster children to do that: if stderr is connected to the terminal,
and the user has put the postmaster in background, any log output would
result in the child process freezing up. Hence, switch them all to
doing what backends do, ie, nothing. This allows them to inherit the
postmaster's SIG_IGN setting. On the other hand, manually-launched
processes such as standalone backends will have default processing,
which seems fine.
In passing, also remove useless resets of SIGCONT and SIGWINCH signal
processing. Perhaps the postmaster once changed those to something
besides SIG_DFL, but it doesn't now, so these are just wasted (and
confusing) syscalls.
Basically, this propagates the changes made in commit 8e2998d8a from
backends to other postmaster children. Probably the only reason these
calls now exist elsewhere is that I missed changing pgstat.c along with
postgres.c at the time.
Given the lack of field complaints that can be traced to this, I don't
presently feel a need to back-patch.
Discussion: https://postgr.es/m/5627.1542477392@sss.pgh.pa.us
This commit completes the work prepared in 1a0586de36, splitting the
old TupleTableSlot implementation (which could store buffer, heap,
minimal and virtual slots) into four different slot types. As
described in the aforementioned commit, this is done with the goal of
making tuple table slots extensible, to allow for pluggable table
access methods.
To achieve runtime extensibility for TupleTableSlots, operations on
slots that can differ between types of slots are performed using the
TupleTableSlotOps struct provided at slot creation time. That
includes everything from the size of the TupleTableSlot struct to be
allocated, to initialization, deforming, etc. See the definition of
the TupleTableSlotOps struct for more detailed information about its
callbacks.
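For instance, callers now select the concrete slot behavior by passing
the appropriate ops struct at creation time, along the lines of:

    /* a virtual slot, and a slot holding an ordinary heap tuple */
    TupleTableSlot *vslot = MakeTupleTableSlot(tupdesc, &TTSOpsVirtual);
    TupleTableSlot *hslot = MakeTupleTableSlot(tupdesc, &TTSOpsHeapTuple);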
I decided to rename TTSOpsBufferTuple to TTSOpsBufferHeapTuple and
ExecCopySlotTuple to ExecCopySlotHeapTuple, as that seems more
consistent with other naming introduced in recent patches.
There's plenty of optimization potential in the slot implementation, but
according to benchmarking the state after this commit has similar
performance characteristics to before this set of changes, which seems
sufficient.
There are a few changes in execReplication.c that currently need to
poke through the slot abstraction; they'll be repaired once the
pluggable storage patchset provides the necessary infrastructure.
Author: Andres Freund and Ashutosh Bapat, with changes by Amit Khandekar
Discussion: https://postgr.es/m/20181105210039.hh4vvi4vwoq5ba2q@alap3.anarazel.de
This becomes useful when used to retry a transaction after a
serialization error or deadlock abort. (We don't yet have that feature,
but this is preparation for it.)
While at it, use a separate random state for thread administrivia such
as deciding which script to run, how long to delay for throttling, or
whether to log a message when sampling; this not only makes these tasks
independent of each other, but makes the actual thread run
deterministic.
Author: Marina Polyakova
Reviewed-by: Fabien Coelho
Discussion: https://postgr.es/m/72a0d590d6ba06f242d75c2e641820ec@postgrespro.ru
This speeds up write operations (INSERT, UPDATE, DELETE, COPY, as well
as the future MERGE) on partitioned tables.
This changes the setup for tuple routing so that it does far less work
during the initial setup and pushes more work out to when partitions
receive tuples. PartitionDispatchData structs for sub-partitioned
tables are only created when a tuple gets routed through them. The
possibly large arrays in the PartitionTupleRouting struct have largely
been removed. The partitions[] array remains but now never contains any
NULL gaps. Previously the NULLs had to be skipped during
ExecCleanupTupleRouting(), which could add a large overhead to the
cleanup when the number of partitions was large. The partitions[] array
is allocated small to start with and only enlarged when we route tuples
to enough partitions that it runs out of space. This allows us to keep
simple single-row partition INSERTs running quickly.
The arrays in PartitionTupleRouting which stored the tuple translation maps
have now been removed. These have been moved out into a
PartitionRoutingInfo struct which is an additional field in ResultRelInfo.
The find_all_inheritors() call still remains by far the slowest part of
ExecSetupPartitionTupleRouting(). This commit just removes the other slow
parts.
In passing also rename the tuple translation maps from being ParentToChild
and ChildToParent to being RootToPartition and PartitionToRoot. The old
names mislead you into thinking that a partition of some sub-partitioned
table would translate to the rowtype of the sub-partitioned table rather
than the root partitioned table.
Authors: David Rowley and Amit Langote, heavily revised by Álvaro Herrera
Testing help from Jesper Pedersen and Kato Sho.
Discussion: https://postgr.es/m/CAKJS1f_1RJyFquuCKRFHTdcXqoPX-PYqAd7nz=GVBwvGh4a6xA@mail.gmail.com