That avoids having to care about the lifetime of the
HeapTupleHeaderData passed to ExecStore[Buffer]HeapTuple(). That
doesn't make a huge difference for a plain HeapTupleTableSlot, but for
BufferHeapTupleTableSlot it can be a significant advantage, avoiding
the need to materialize slots where it's inconvenient to provide a
HeapTupleData with appropriate lifetime to point to the on-disk tuple.
It's quite possible that we'll want to add support functions for
constructing HeapTuples using that embedded HeapTupleData, but for now
callers do so themselves.
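As a rough, hypothetical sketch (not code from this commit; the field and
variable names are illustrative), a call site can now build the tuple in
the slot's embedded HeapTupleData instead of supplying its own
longer-lived struct:
    BufferHeapTupleTableSlot *bslot = (BufferHeapTupleTableSlot *) slot;
    HeapTuple   tuple = &bslot->base.tupdata;   /* embedded HeapTupleData */
    ItemId      itemid = PageGetItemId(page, offnum);
    tuple->t_data = (HeapTupleHeader) PageGetItem(page, itemid);
    tuple->t_len = ItemIdGetLength(itemid);
    ItemPointerSet(&tuple->t_self, blkno, offnum);
    tuple->t_tableOid = RelationGetRelid(rel);
    /* no separately-managed HeapTupleData, and no need to materialize the
     * slot just to keep the stored tuple valid */
    ExecStoreBufferHeapTuple(tuple, slot, buffer);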
Author: Andres Freund
Discussion: https://postgr.es/m/20180703070645.wchpu5muyto5n647@alap3.anarazel.de
This makes it possible to avoid an unnecessary pin/unpin cycle when
storing a tuple residing in an already-pinned buffer into a slot, in
cases where the pin isn't needed any further at the call site.
Only a single caller for now (to ensure coverage), but upcoming
patches will increase use of the new function.
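As a hedged before/after sketch (assuming the new function is named
ExecStorePinnedBufferHeapTuple(); the surrounding code is illustrative):
    /* before: the slot takes its own pin, and the caller's pin, no longer
     * needed, has to be dropped explicitly */
    ExecStoreBufferHeapTuple(tuple, slot, buffer);
    ReleaseBuffer(buffer);
    /* after: hand the caller's existing pin over to the slot, saving a
     * pin/unpin cycle */
    ExecStorePinnedBufferHeapTuple(tuple, slot, buffer);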
Author: Andres Freund
Discussion: https://postgr.es/m/20180703070645.wchpu5muyto5n647@alap3.anarazel.de
Previously, this function acquired locks using
find_all_inheritors(), which locks the children of each table that it
processes in ascending OID order, and which processes the inheritance
hierarchy as a whole in a breadth-first fashion. Now, it processes
the inheritance hierarchy in a depth-first fashion, and at each level
it proceeds in the order in which tables appear in the PartitionDesc.
If table inheritance rather than table partitioning is used, the old
order is preserved.
This change moves the locking of any given partition much closer to
the code that actually expands that partition. This seems essential
if we ever want to allow concurrent DDL to add or remove partitions,
because if the set of partitions can change, we must use the same data
to decide which partitions to lock as we do to decide which partitions
to expand; otherwise, we might expand a partition that we haven't
locked. It should hopefully also facilitate efforts to postpone
inheritance expansion or locking for performance reasons, because
there's really no way to postpone locking some partitions if
we're blindly locking them all using find_all_inheritors().
The only downside of this change which is known to me is that it
further deviates from the principle that we should always lock the
inheritance hierarchy in find_all_inheritors() order to avoid deadlock
risk. However, we've already crossed that bridge in commit
9eefba181f7782d27d85d7e94e6028371e7ab2d7 and there are further patches
pending that make similar changes, so this isn't really giving up
anything that we haven't surrendered already -- and it seems entirely
worth it, given the performance benefits some of those changes seem
likely to bring.
Patch by me; thanks to David Rowley for discussion of these issues.
Discussion: http://postgr.es/m/CAKJS1f_eEYVEq5tM8sm1k-HOwG0AyCPwX54XG9x4w0zy_N4Q_Q@mail.gmail.com
Discussion: http://postgr.es/m/CA+TgmoZUwPf_uanjF==gTGBMJrn8uCq52XYvAEorNkLrUdoawg@mail.gmail.com
_bt_getstackbuf() is called at exactly two points following commit
efada2b8e92 (one call site is concerned with page splits, while the
other is concerned with page deletion). The parent buffer returned by
_bt_getstackbuf() is write-locked in both cases. Remove the 'access'
argument and make _bt_getstackbuf() assume that callers require a
write-lock.
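A hedged sketch of a call site after the change (assuming the remaining
signature is _bt_getstackbuf(rel, stack); the error wording is
illustrative):
    pbuf = _bt_getstackbuf(rel, stack);
    if (pbuf == InvalidBuffer)
        elog(ERROR, "failed to re-find parent key in index \"%s\"",
             RelationGetRelationName(rel));
    /* the parent buffer comes back write-locked (BT_WRITE); modify the
     * parent page, then drop the lock and pin */
    _bt_relbuf(rel, pbuf);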
Commit efada2b8e92, which made the nbtree page deletion algorithm more
robust, removed _bt_getstackbuf() calls from _bt_pagedel(). It failed
to update a comment that referenced the earlier approach. Update the
comment to explain that the _bt_getstackbuf() page deletion call site
mirrors the only other remaining _bt_getstackbuf() call site, which is
reached during page splits.
Remove some unnecessary, legacy-looking use of the PROCEDURAL keyword
before LANGUAGE. We mostly don't use this anymore, so some of these
look a bit old.
There is still some use in pg_dump, which is harder to remove because
it's baked into the archive format, so I'm not touching that.
Discussion: https://www.postgresql.org/message-id/2330919b-62d9-29ac-8de3-58c024fdcb96@2ndquadrant.com
When preparing a transaction in two-phase commit, a dummy PGPROC entry
holding the GID used for the transaction is registered, which gets
released once COMMIT PREPARED is run. Before releasing its shared memory
state, all the locks taken in the prepared transaction are released
using a dedicated set of callbacks (pgstat and multixact having similar
callbacks), which may cause the locks to be released before the GID is
set free.
Hence, there is a small window where lock conflicts could happen, for
example:
- Transaction A releases its locks, still holding its GID in shared
memory.
- Transaction B was waiting on a lock that conflicted with transaction
A's locks.
- Once those locks are released, transaction B continues its processing,
trying to reuse the same GID as transaction A.
- Transaction B fails because that GID is still in use by transaction A.
This commit changes the shared memory state release so that post-commit
callbacks and predicate lock cleanup happen consistently with the shared
memory state cleanup for the dummy PGPROC entry. The race window is
small and 2PC had this issue from the start, so no backpatch is done.
On top of that, the fixes discussed involved ABI breakages, which are
not welcome in stable branches.
Reported-by: Oleksii Kliukin, Ildar Musin
Diagnosed-by: Oleksii Kliukin, Ildar Musin
Author: Michael Paquier
Reviewed-by: Masahiko Sawada, Oleksii Kliukin
Discussion: https://postgr.es/m/BF9B38A4-2BFF-46E8-BA87-A2D00A8047A6@hintbits.com
Commit 16be2fd1 introduced the flag DSA_ALLOC_NO_OOM to control whether
the DSA allocator would raise an error or return InvalidDsaPointer on
failure to allocate. One edge case was not handled correctly: if we
fail to allocate an internal "span" object for a large allocation, we
would always return InvalidDsaPointer regardless of the flag; a caller
not expecting that could then dereference a null pointer.
This is a plausible explanation for a one-off report of a segfault.
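To illustrate the intended contract (a hedged sketch, not code from this
commit; only the dsa.h names are real):
    dsa_pointer p;
    /* without DSA_ALLOC_NO_OOM, allocation failure raises an error, so
     * the result can be used unconditionally */
    p = dsa_allocate(area, size);
    memset(dsa_get_address(area, p), 0, size);
    /* with DSA_ALLOC_NO_OOM, the caller must check for InvalidDsaPointer */
    p = dsa_allocate_extended(area, size, DSA_ALLOC_NO_OOM);
    if (!DsaPointerIsValid(p))
        elog(LOG, "out of dynamic shared memory");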
Remove a redundant pair of braces so that all three stanzas that handle
DSA_ALLOC_NO_OOM match in style, for visual consistency.
While fixing inconsistencies, if FreePageManagerGet() can't supply the
pages that our book-keeping says it should be able to supply, then we
should always report a FATAL error. Previously we treated that as a
regular allocation failure in one code path, but as a FATAL condition
in another.
Back-patch to 10, where dsa.c landed.
Author: Thomas Munro
Reported-by: Jakub Glapa
Discussion: https://postgr.es/m/CAEepm=2oPqXxyWQ-1o60tpOLrwkw=VpgNXqqF1VN2EyO9zKGQw@mail.gmail.com
The Bison documentation clearly states that a semicolon is required
after every grammar rule, and our scripts that generate ecpg's
grammar from the backend's implicitly assumed this to be true. But it
turns out that only ancient versions of Bison actually enforce that.
There have been a couple of rules without trailing semicolons in
gram.y for some time, and as a consequence, ecpg's grammar was faulty
and produced wrong output for the affected statements.
To fix, add the missing semis, and add some cross-checks to ecpg's
scripts so that they'll bleat if we mess this up again.
The cases that were broken were:
* "SET variable = DEFAULT" (but not "SET variable TO DEFAULT"),
as well as allied syntaxes such as ALTER SYSTEM SET ... DEFAULT.
These produced syntactically invalid output that the server
would reject.
* Multiple type names in DROP TYPE/DOMAIN commands. Only the
first type name would be listed in the emitted command.
Per report from Daisuke Higuchi. Back-patch to all supported versions.
Discussion: https://postgr.es/m/1803D792815FC24D871C00D17AE95905DB51CE@g01jpexmbkw24
Previously, we tolerated EBADF as a way for the operating system to
indicate that it doesn't support fsync() on a directory. Tolerate
EINVAL too, for older versions of Linux CIFS.
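A hedged, simplified sketch of the idea (the real code is fd.c's
fsync_fname_ext(); variable names here are illustrative):
    if (fsync(fd) != 0)
    {
        /* Some OSes report EBADF for directories, and older Linux CIFS
         * reports EINVAL; treat both as "fsync not supported here" rather
         * than as an error. */
        if (isdir && (errno == EBADF || errno == EINVAL))
            return 0;
        ereport(ERROR,
                (errcode_for_file_access(),
                 errmsg("could not fsync file \"%s\": %m", fname)));
    }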
Bug #15636. Back-patch all the way.
Reported-by: John Klann
Discussion: https://postgr.es/m/15636-d380890dafd78fc6@postgresql.org
One unintended consequence of commit 9ccdd7f6 was that Windows WSL
users started getting a panic whenever we tried to initiate data
flushing with sync_file_range(), because WSL does not implement that
system call. Previously, they got a stream of periodic warnings,
which was also undesirable but at least ignorable.
Prevent the panic by handling ENOSYS specially and skipping the panic
promotion with data_sync_elevel(). Also suppress future attempts
after the first such failure so that the pre-existing problem of
noisy warnings is improved.
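A hedged, simplified sketch of the new behavior (the real logic lives in
pg_flush_data() in fd.c; the helper shown here is hypothetical):
    static void
    flush_range(int fd, off_t offset, off_t nbytes)
    {
        static bool not_implemented_by_kernel = false;
        if (not_implemented_by_kernel)
            return;             /* don't retry after the first ENOSYS */
        if (sync_file_range(fd, offset, nbytes, SYNC_FILE_RANGE_WRITE) != 0)
        {
            int         elevel;
            if (errno == ENOSYS)
            {
                /* e.g. WSL: warn once instead of panicking */
                elevel = WARNING;
                not_implemented_by_kernel = true;
            }
            else
                elevel = data_sync_elevel(WARNING);
            ereport(elevel,
                    (errcode_for_file_access(),
                     errmsg("could not flush dirty data: %m")));
        }
    }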
Back-patch to 9.6 (older branches were not affected in this way by
9ccdd7f6).
Author: Thomas Munro and James Sewell
Tested-by: James Sewell
Reported-by: Bruce Klein
Discussion: https://postgr.es/m/CA+mCpegfOUph2U4ZADtQT16dfbkjjYNJL1bSTWErsazaFjQW9A@mail.gmail.com
The current set of TAP tests for two-phase transactions includes some
coverage for post-commit callbacks of multixact, but lacked coverage for
the recovery of those callbacks. This commit adds some tests exercising
soft and hard restarts for this case, using transactions that include
DDLs.
Author: Michael Paquier
Reviewed-by: Oleksii Kliukin
Discussion: https://postgr.es/m/20190221055431.GO15532@paquier.xyz
In the case where inheritance_planner() finds that every table has
been excluded by constraints, it thought it could get away with
making a plan consisting of just a dummy Result node. While certainly
there's no updating or deleting to be done, this had two user-visible
problems: the plan did not report the correct set of output columns
when a RETURNING clause was present, and if there were any
statement-level triggers that should be fired, it didn't fire them.
Hence, rather than only generating the dummy Result, we need to
stick a valid ModifyTable node on top, which requires a tad more
effort here.
It's been broken this way for as long as inheritance_planner() has
known about deleting excluded subplans at all (cf commit 635d42e9c),
so back-patch to all supported branches.
Amit Langote and Tom Lane, per a report from Petr Fedorov.
Discussion: https://postgr.es/m/5da6f0f0-1364-1876-6978-907678f89a3e@phystech.edu
New code introduced in 050710b36964dee7e1b2bf6b5ef00041fd5d2787. The
lack of const does not currently cause a compiler warning, but it's nice
to have for consistency with the surrounding code.
There isn't any good reason for this test to be a self-join rather
than a join between separate tables, except that it saved a couple
of SQL commands for setup. A proposed patch to optimize away
self-joins breaks the test, so adjust it to avoid that happening.
Discussion: https://postgr.es/m/64486b0b-0404-e39e-322d-0801154901f3@postgrespro.ru
It seems to make more sense for this to be in selfuncs.c, since it's
largely a statistical-estimation thing, and it's related to other
functions like estimate_hash_bucket_stats that are there.
While at it, change the result type from Size to double. Perhaps at one
point it was impossible for the result to overflow an integer, but
I've got no confidence in that proposition anymore. Nothing's actually
done with the result except to compare it to a work_mem-based limit,
so as long as we don't get an overflow on the way to that comparison,
things should be fine even with very large dNumGroups.
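A hedged sketch of the only consumer pattern (names approximate; not
necessarily the exact code):
    double      hashaggtablesize;
    hashaggtablesize = estimate_hashagg_tablesize(path, agg_costs,
                                                  dNumGroups);
    /* with a double result, very large dNumGroups can't overflow here */
    if (hashaggtablesize < work_mem * 1024.0)
        consider_hashed_aggregation = true;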
Code movement proposed by Antonin Houska, type change by me
Discussion: https://postgr.es/m/25767.1549359615@localhost
Change pg_stat_ssl so that an unprivileged user can only see their own
rows; other rows will be all null. This makes the behavior consistent
with pg_stat_activity, where information about where the connection
came from is also restricted.
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://www.postgresql.org/message-id/flat/63117976-d02c-c8e2-3aef-caa31a5ab8d3%402ndquadrant.com
Don't expand inputfile and outputfile to absolute paths globally, just
where needed. In particular, pass them as is to the file name
arguments of the diff command, so that we don't see the full absolute
path in the diff header, which makes the diff unnecessarily verbose
and harder to read.
Discussion: https://www.postgresql.org/message-id/0cc82900-c457-1cee-3ab2-7b0f5d215061@2ndquadrant.com
This is similar in spirit to the existing partbounds.c file in the
same directory, except that there's a lot less code in the new file
created by this commit. Pending work in this area proposes to add a
bunch more code related to PartitionDescs, though, and this will give
us a good place to put it.
Discussion: http://postgr.es/m/CA+TgmoZUwPf_uanjF==gTGBMJrn8uCq52XYvAEorNkLrUdoawg@mail.gmail.com
Instead of locking all partitions to which we might route a tuple at
executor startup, just lock them as we use them. In some cases such a
partition might get locked at executor startup anyway because it
appears in the query's range table for some other reason, but in other
cases this is a bit of a savings.
This changes the order in which partitions are locked in some cases,
which might conceivably create deadlock hazards that don't exist
today, but per discussion, it seems like such cases should be rare
enough that we can neglect them in favor of improving performance.
David Rowley, reviewed and tested by Tomas Vondra, Sho Kato, John
Naylor, Tom Lane, and me.
Discussion: http://postgr.es/m/CAKJS1f-=FnMqmQP6qitkD+xEddxw22ySLP-0xFk3JAqUX2yfMw@mail.gmail.com
In previous releases, the input file of dbtoepub was postgres.xml, and
dbtoepub knows to derive the output file name postgres.epub from that
automatically. But now the input file is postgres.sgml (since
postgres.sgml is itself an XML file and we no longer need the
intermediate postgres.xml file), but dbtoepub doesn't know how to deal
with the .sgml suffix, so the automatically derived output file name
becomes postgres.sgml.epub. Fix by adding an explicit -o option.
Check ec_relids before bothering to iterate through the EC members.
On a perhaps extreme, but still real-world, query in which
match_eclasses_to_foreign_key_col() accounts for the bulk of the
planner's runtime, this saves nearly 40% of the runtime. It's a bit
of a stopgap fix, but it's simple enough to be back-patched to 9.6
where this code came in; so let's do that.
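A hedged sketch of the short-circuit (identifier names are illustrative,
not necessarily those used in equivclass.c):
    /* skip this EC cheaply unless it mentions both relations */
    if (!bms_is_member(rel1->relid, ec->ec_relids) ||
        !bms_is_member(rel2->relid, ec->ec_relids))
        continue;
    /* only now pay the cost of iterating over ec->ec_members */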
David Rowley
Discussion: https://postgr.es/m/6970.1545327857@sss.pgh.pa.us
On (rare) platforms where sizeof(bool) > 1, we need to use our own
bool, but imported C99 code (such as Ryu) may want to use bool values
as array subscripts, which elicits warnings if bool is defined as
char. Using unsigned char instead should work just as well for our
purposes, and avoid such warnings.
Per buildfarm members prairiedog and locust.
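A hedged sketch of the idea (see c.h for the real definitions):
    /* When the compiler's own bool can't be used, define ours as unsigned
     * char rather than char, so that code like "table[flag]" doesn't draw
     * -Wchar-subscripts warnings. */
    #ifndef bool
    typedef unsigned char bool;
    #define true    ((bool) 1)
    #define false   ((bool) 0)
    #endif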
PG type coercions are generally strict, ie a NULL input must produce
a NULL output (or, in domain cases, possibly an error). The planner's
understanding of that was a bit incomplete though, so improve it:
* Teach contain_nonstrict_functions() that CoerceViaIO can always be
considered strict. Previously it believed that only if the underlying
I/O functions were marked strict, which is often but not always true.
* Teach clause_is_strict_for() that CoerceViaIO, ArrayCoerceExpr,
ConvertRowtypeExpr, and CoerceToDomain can all be considered strict.
Previously it knew nothing about any of them.
The main user-visible impact of this is that IS NOT NULL predicates
can be proven to hold from expressions involving casts in more cases
than before, allowing partial indexes with such predicates to be used
without extra pushups. This reduces the surprise factor for users,
who may well be used to ordinary (function-call-based) casts being
known to be strict.
Per a gripe from Samuel Williams. This doesn't rise to the level of
a bug, IMO, so no back-patch.
Discussion: https://postgr.es/m/27571.1550617881@sss.pgh.pa.us
The recursion in contain_nonstrict_functions_walker() was done wrong,
causing the strictness check to be bypassed for a parse node that
is the immediate input of an ArrayCoerceExpr node. This could allow,
for example, incorrect decisions about whether a strict SQL function
can be inlined.
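A hedged illustration of the mistake (not the exact code): calling
expression_tree_walker() directly on a child node visits that child's
children but never the child itself, so the walker must recurse through
itself instead.
    /* wrong: acoerce->arg itself is never checked for non-strictness */
    return expression_tree_walker((Node *) acoerce->arg,
                                  contain_nonstrict_functions_walker,
                                  context);
    /* right: recurse via the walker so acoerce->arg is checked too */
    return contain_nonstrict_functions_walker((Node *) acoerce->arg,
                                              context);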
I didn't add a regression test, because (a) the bug is so narrow
and (b) I couldn't think of a test case that wasn't dependent on so
many other behaviors that it would likely soon rot to the point of not
testing what it was intended to.
I broke this in commit c12d570fa, so back-patch to v11.
Discussion: https://postgr.es/m/27571.1550617881@sss.pgh.pa.us
pg_identify_object_as_address crashes when passed certain tuples from
inconsistent system catalogs. Make it more defensive.
Author: Álvaro Herrera
Reviewed-by: Michaël Paquier
Discussion: https://postgr.es/m/20190218202743.GA12392@alvherre.pgsql
Commit b0eaa4c51b avoided the creation of the FSM for small tables, so
the functions that use the FSM to compute free space can return zero for
such tables. This was previously also possible in cases where vacuum had
not yet been triggered to update the FSM.
This commit updates the comments in code and documentation to reflect this
behavior.
Author: John Naylor
Reviewed-by: Amit Kapila
Discussion: https://postgr.es/m/CACPNZCtba-3m1q3A8gxA_vxg=T7gQf7gMbpR4Ciy5LntY-j+0Q@mail.gmail.com
INSERT ... VALUES for a single VALUES row is implemented differently
from a multi-row VALUES list, which causes inconsistent behaviour in
the way that DEFAULT items are handled. In particular, when inserting
into an auto-updatable view on top of a table with a column default, a
DEFAULT item in a single VALUES row gets correctly replaced with the
table column's default, but for a multi-row VALUES list it is replaced
with NULL.
Fix this by allowing rewriteValuesRTE() to leave DEFAULT items in the
VALUES list untouched if the target relation is an auto-updatable view
and has no column default, deferring DEFAULT-expansion until the query
against the base relation is rewritten. For all other types of target
relation, including tables and trigger- and rule-updatable views, we
must continue to replace DEFAULT items with NULL in the absence of a
column default.
This is somewhat complicated by the fact that if an auto-updatable
view has DO ALSO rules attached, the VALUES lists for the product
queries need to be handled differently from the original query, since
the product queries need to act like rule-updatable views whereas the
original query has auto-updatable view semantics.
Back-patch to all supported versions.
Reported by Roger Curley (bug #15623). Patch by Amit Langote and me.
Discussion: https://postgr.es/m/15623-5d67a46788ec8b7f@postgresql.org
When building an initial slot snapshot, SnapBuildBuildSnapshot() marks
the snapshot's type field as a historic MVCC snapshot, and
SnapBuildInitialSnapshot() failed to override that marker even though the
snapshot it returns is a regular MVCC one.
Existing callers of SnapBuildBuildSnapshot() do not care about the type
of snapshot used, but extensions calling it actually may, as reported.
While at it, correctly mark the snapshot type when importing one. This
is cosmetic, as the field is forced to 0 there anyway.
Author: Antonin Houska
Reviewed-by: Álvaro Herrera, Michael Paquier
Discussion: https://postgr.es/m/23215.1527665193@localhost
Backpatch-through: 9.4
Changes made by commit 02ddd49 mean that dumps made against pre-version-12
instances are no longer comparable with those made against version 12
or later instances. This makes cross-version upgrade testing fail in the
buildfarm. Experimentation has shown that the error is cured if the
dumps are made when extra_float_digits is set to 0. Hence this patch
allows for it to be explicitly set rather than relying on pg_dump's
builtin default (3 in almost all cases). This feature might have other
uses, but should not normally be used.
Discussion: https://postgr.es/m/c76f7051-8fd3-ec10-7579-1f8842305b85@2ndQuadrant.com
So far ECPG programs had to treat binary data for bytea columns as the
'char' type. But this meant converting to and from the escaped format with
PQescapeBytea()/PQunescapeBytea(), forcing users to add unnecessary code
and pay the conversion cost at runtime. With a dedicated datatype for
bytea, most of this special handling is no longer needed.
Author: Matsumura-san ("Matsumura, Ryo" <matsumura.ryo@jp.fujitsu.com>)
Discussion: https://postgr.es/m/flat/03040DFF97E6E54E88D3BFEE5F5480F737A141F9@G01JPEXMBYT04
For the convenience of extensions, we previously only saved PathTargets
for grouped, window, and final relations in root->upper_targets[] in
grouping_planner(). To improve the convenience, save PathTargets for
distinct and ordered relations as well.
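A hedged sketch of how an extension hook might consume the saved targets
(the UpperRelationKind values are from pathnodes.h; the surrounding code
is illustrative):
    PathTarget *distinct_target = root->upper_targets[UPPERREL_DISTINCT];
    PathTarget *ordered_target = root->upper_targets[UPPERREL_ORDERED];
    PathTarget *final_target = root->upper_targets[UPPERREL_FINAL];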
Author: Antonin Houska, with an additional change by me
Discussion: https://postgr.es/m/10994.1549559088@localhost
ee9e145 already fixed the checksum tests of pg_basebackup once, but one
seek() call was still missed. Also, the data written to files to emulate
corruption did not actually consist of zeros, as the quoting style was
incorrect.
Backpatch the portion for pg_basebackup to v11, where these tests were
introduced. The tests of pg_verify_checksums are new as of v12.
Author: Michael Banck
Discussion: https://postgr.es/m/1550153276.796.35.camel@credativ.de
Backpatch-through: 11
When reading a new page internally, and depending on the way the WAL
reader facility gets used by plugins, the current implementation of the
WAL reader may end up reading a block multiple times even though that is
not actually necessary, as the requested data length may be equal to what
has already been read. This can happen for any size, but is more likely
to happen at the end of a page. This can cause performance penalties in
plugins that rely on block reads being purely sequential (zlib, for
example, does not like backward reads). The new behavior also shaves some
cycles when doing recovery.
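A hedged sketch of the idea, roughly the short-circuit in xlogreader.c's
ReadPageInternal() (exact field names may differ):
    /* reuse the already-buffered page if it covers the requested length */
    if (targetSegNo == state->readSegNo &&
        targetPageOff == state->readOff &&
        reqLen <= state->readLen)
        return state->readLen;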
Author: Arthur Zakirov
Reviewed-by: Andrey Lepikhov, Michael Paquier, Grigory Smolkin
Discussion: https://postgr.es/m/2ddf4a32-517e-d6f4-d992-4a63b6035bfd@postgrespro.ru