The test crashes and burns quite badly, for some reason, but even if it
didn't it wouldn't work, since Windows doesn't let you rename a file
held by a running process.
StoreIndexTuple was a loop over index_getattr, which is O(N^2)
if the index columns are variable-width, and the performance
impact is already quite visible at ten columns. The obvious
move is to replace that with a call to index_deform_tuple ...
but that's *also* a loop over index_getattr. Improve it to
be essentially a clone of heap_deform_tuple.
(There are a few other places that loop over all index columns
with index_getattr, and perhaps should be changed likewise,
but most of them don't seem performance-critical. Anyway, the
rest would mostly only be interested in the index key columns,
which there aren't likely to be so many of. Wide index tuples
are a new thing with INCLUDE.)
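For illustration, a hedged sketch of the difference (simplified; the
real code is StoreIndexTuple and access/common/indextuple.c):

    /*
     * Before: one index_getattr() call per column.  Once a
     * variable-width column has been passed, each later call must
     * re-walk all earlier attributes, making the loop O(N^2).
     */
    for (int i = 0; i < nindexatts; i++)
        values[i] = index_getattr(itup, i + 1, itupdesc, &isnull[i]);

    /*
     * After: a single pass that carries a running offset across
     * columns, in the style of heap_deform_tuple().
     */
    index_deform_tuple(itup, itupdesc, values, isnull);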
Konstantin Knizhnik
Discussion: https://postgr.es/m/e06b2d27-04fc-5c0e-bb8c-ecd72aa24959@postgrespro.ru
Commit f092de05 added a test for pg_dumpall --exclude-database including
the wildcard pattern '*dump*' which matches some files in the source
directory. The test library on msys uses the shell, which expands the
pattern, and thus the program gets incorrect arguments. This doesn't
happen if the pattern matches no files, so here the pattern is changed
to '*dump_test*', which matches nothing.
Per buildfarm animal jacana.
Previously, rewriteTargetListIU() generated a list of attribute
numbers from the targetlist, which were passed to rewriteValuesRTE(),
which expected them to contain the same number of entries as there are
columns in the VALUES RTE, and to be in the same order. That was fine
when the target relation was a table, but for an updatable view it
could be broken in at least three different ways ---
rewriteTargetListIU() could insert additional targetlist entries for
view columns with defaults, the view columns could be in a different
order from the columns of the underlying base relation, and targetlist
entries could be merged together when assigning to elements of an
array or composite type. As a result, when recursing to the base
relation, the list of attribute numbers generated from the rewritten
targetlist could no longer be relied upon to match the columns of the
VALUES RTE. We got away with that prior to 41531e42d3 because it used
to always be the case that rewriteValuesRTE() did nothing for the
underlying base relation, since all DEFAULTS had already been replaced
when it was initially invoked for the view, but that was incorrect
because it failed to apply defaults from the base relation.
Fix this by examining the targetlist entries more carefully and
picking out just those that are simple Vars referencing the VALUES
RTE. That's sufficient for the purposes of rewriteValuesRTE(), which
is only responsible for dealing with DEFAULT items in the VALUES
RTE. Any DEFAULT item in the VALUES RTE that doesn't have a matching
simple-Var-assignment in the targetlist is an error which we complain
about, but in theory that ought to be impossible.
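A hedged sketch of the idea (variable names illustrative, not the
committed code): build the attribute-number map by looking only at
simple Vars that reference the VALUES RTE at index rti:

    ListCell   *lc;
    /* attrnos[i] is the target-relation attno assigned from the
     * i'th VALUES column, or 0 if there is no simple-Var match */
    int        *attrnos = (int *) palloc0(numattrs * sizeof(int));

    foreach(lc, parsetree->targetList)
    {
        TargetEntry *tle = (TargetEntry *) lfirst(lc);

        if (IsA(tle->expr, Var))
        {
            Var *var = (Var *) tle->expr;

            if (var->varno == rti && var->varlevelsup == 0)
                attrnos[var->varattno - 1] = tle->resno;
        }
    }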
Additionally, move this code into rewriteValuesRTE() to give a clearer
separation of concerns between the two functions. There is no need for
rewriteTargetListIU() to know about the details of the VALUES RTE.
While at it, fix the comment for rewriteValuesRTE() which claimed that
it doesn't support array element and field assignments --- that hasn't
been true since a3c7a993d5 (9.6 and later).
Back-patch to all supported versions, with minor differences for the
pre-9.6 branches, which don't support array element and field
assignments to the same column in multi-row VALUES lists.
Reviewed by Amit Langote.
Discussion: https://postgr.es/m/15623-5d67a46788ec8b7f@postgresql.org
This changes the partition functions so that tables and indexes which
are not part of partition trees are handled the same way as undefined
objects and unsupported relkinds: pg_partition_tree() returns no rows
and pg_partition_root() returns a NULL result. Hence, partitioned
tables, partitioned indexes, and relations with
pg_class.relispartition set are considered valid objects to process.
Previously, tables and indexes not included in a partition tree were
processed the same way as a partition or a partitioned table, which
caused the functions to return inconsistent results for inherited
tables, especially when inheriting from multiple tables.
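A minimal sketch of the resulting validity test (hypothetical helper
name; the real checks live in partitionfuncs.c):

    static bool
    relation_is_in_partition_tree(Relation rel)
    {
        return rel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE ||
               rel->rd_rel->relkind == RELKIND_PARTITIONED_INDEX ||
               rel->rd_rel->relispartition;
    }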
Reported-by: Álvaro Herrera
Author: Amit Langote, Michael Paquier
Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/20190228193203.GA26151@alvherre.pgsql
Future-proofing against a common mistake in attempts to optimize NOT IN.
We don't have such an optimization right now, but attempts to do so
are in the works, and some of 'em are buggy. Add a regression test case
covering the point.
David Rowley
Discussion: https://postgr.es/m/CAKJS1f90E9agVZryVyUpbHQbjTt5ExqS2Fsodmt5_A7E_cEyVA@mail.gmail.com
In particular, make it possible to prove/refute "x IS NULL" and
"x IS NOT NULL" predicates from a clause involving a ScalarArrayOpExpr
even when we are unable or unwilling to deconstruct the expression
into an AND/OR tree. This avoids a former unexpected degradation of
plan quality when the size of an ARRAY[] expression or array constant
exceeded the arbitrary MAX_SAOP_ARRAY_SIZE limit. For IS-NULL proofs,
we don't really care about the values of the individual array elements;
at most, we care whether there are any, and for some common cases we
needn't even know that.
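A hedged sketch of the core inference (predtest.c logic simplified;
clause and var here stand for the SAOP clause and the variable being
tested):

    if (IsA(clause, ScalarArrayOpExpr))
    {
        ScalarArrayOpExpr *saop = (ScalarArrayOpExpr *) clause;
        Node   *leftop = (Node *) linitial(saop->args);

        /*
         * For the useOr (IN) form, a strict operator over any array,
         * even an empty one, cannot yield TRUE when var is NULL; so
         * the clause being true proves "var IS NOT NULL" and refutes
         * "var IS NULL".  The ALL form additionally needs to know
         * that the array is non-empty.
         */
        if (saop->useOr && op_strict(saop->opno) && equal(leftop, var))
            return true;
    }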
The main user-visible effect of this is to let the optimizer recognize
applicability of partial indexes with "x IS NOT NULL" predicates to
queries with "x IN (array)" clauses in some cases where it previously
failed to recognize that. The structure of predtest.c is such that a
bunch of related proofs will now also succeed, but they're probably
much less useful in the wild.
James Coleman, reviewed by David Rowley
Discussion: https://postgr.es/m/CAAaqYe8yKSvzbyu8w-dThRs9aTFMwrFxn_BkTYeXgjqe3CbNjg@mail.gmail.com
It turns out that different getopt implementations spell the error
for missing arguments in different ways. This test is of fairly
marginal
value, so instead of trying to keep up with the different error
messages just remove the test.
For the upcoming pluggable table access methods it's quite
inconvenient to store tuples as HeapTuples, as that'd require
converting tuples from their native format into HeapTuples. Instead,
use slots to manage EPQ tuples.
To fit into that scheme, change the foreign data wrapper callback
RefetchForeignRow to store the tuple in a slot. Insist on using the
caller-provided slot, so it can conveniently be stored in the
corresponding EPQ slot. As there is no in-core user of
RefetchForeignRow, that change was done blindly, but we plan to test
that soon.
To avoid duplicating that work for row locks, make row locking use
the EPQ slots directly - it previously temporarily stored tuples
in LockRowsState.lr_curtuples, but that doesn't seem beneficial, given
we'd possibly end up with a significant number of additional slots.
What used to be signalled by es_epqTupleSet[rti - 1] is now tested
via es_epqTupleSlot[rti - 1] != NULL, as a NULL slot is
distinguishable from a slot containing an empty tuple.
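Sketched:

    /* before: a separate flag array recorded whether an EPQ tuple
     * had been supplied for this rangetable index */
    bool    was_set = estate->es_epqTupleSet[rti - 1];

    /* after: the slot's existence carries the information; NULL is
     * distinguishable from a slot holding an empty tuple */
    bool    is_set = (estate->es_epqTupleSlot[rti - 1] != NULL);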
Author: Andres Freund, Haribabu Kommi, Ashutosh Bapat
Discussion: https://postgr.es/m/20180703070645.wchpu5muyto5n647@alap3.anarazel.de
This option functions similarly to pg_dump's --exclude-table option, but
for database names. The option can be given more than once, and the
be a pattern including wildcard characters.
Author: Andrew Dunstan.
Reviewed-by: Fabien Coelho and Michael Paquier
Discussion: https://postgr.es/m/43a54a47-4aa7-c70e-9ca6-648f436dd6e6@2ndQuadrant.com
After commit b0eaa4c51b, we use a local map of pages to find the required
space for small relations. We do clear this map when we have found a block
with enough free space, when we extend the relation, or on transaction
abort so that it can be used next time. However, we missed clearing
it when we didn't find any pages to try from the map, which leads to
an assertion failure when we later try to use it after relation
extension.
In passing, I have improved some comments in this area.
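A hedged sketch of the missing piece (shape simplified from
freespace.c):

    /* If the local map yields no candidate block, it must still be
     * cleared; otherwise a later assertion, which expects an empty
     * map after the relation has been extended, fires. */
    target_block = fsm_local_search();
    if (target_block == InvalidBlockNumber)
        FSMClearLocalMap();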
Reported-by: Tom Lane based on buildfarm results
Author: Amit Kapila
Reviewed-by: John Naylor
Tested-by: Kuntal Ghosh
Discussion: https://postgr.es/m/32368.1551114120@sss.pgh.pa.us
The function was tweaked so that it returned one row full of NULLs when
working on an unsupported relkind or an undefined object as of cc53123,
and after discussion with Amit and Álvaro it looks more natural to make
it return no rows.
Author: Michael Paquier
Reviewed-by: Álvaro Herrera, Amit Langote
Discussion: https://postgr.es/m/20190227184808.GA17357@alvherre.pgsql
After 5408e233f0667478 it's not necessary to force materializing the
target slot when copying from one buffer slot to another. Previously
that was required because the HeapTupleData portion of the source slot
wasn't guaranteed to stay valid long enough, but now we can simply
copy that part into the destination slot's tupdata.
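A hedged sketch of the resulting copy path (simplified from
tts_buffer_heap_copyslot(); bsrcslot and bdstslot are the source and
destination BufferHeapTupleTableSlots):

    /* Copy the HeapTupleData header into the destination slot's own
     * embedded storage; the buffer pin keeps the tuple body valid,
     * so no materialization is required. */
    memcpy(&bdstslot->base.tupdata, bsrcslot->base.tuple,
           sizeof(HeapTupleData));
    bdstslot->base.tuple = &bdstslot->base.tupdata;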
Author: Andres Freund
When backend functions were added to expose controldata via SQL,
reading of pg_control was consolidated under src/common so that
both frontend and backend could share the same code. That move
from frontend-only to shared frontend-backend failed to recognize
the risk (and coding standards violation) of using a bare open().
In particular, it risked leaking file descriptors if transient
errors occurred while reading the file. Fix that by using
OpenTransientFile() instead in the backend case, which is
purpose-built for this type of usage.
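A hedged sketch of the backend-side pattern (simplified; exact flags
and error text may differ by branch):

    fd = OpenTransientFile(XLOG_CONTROL_FILE, O_RDONLY | PG_BINARY);
    if (fd < 0)
        ereport(ERROR,
                (errcode_for_file_access(),
                 errmsg("could not open file \"%s\" for reading: %m",
                        XLOG_CONTROL_FILE)));
    /* ... read and verify pg_control ... */
    CloseTransientFile(fd);    /* transient fds also closed at abort */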
Since there have been no complaints from the field, and an
intermittent failure is low-risk, no backpatch is done. A hard failure
would of course be bad, but in that case these functions are probably
the least of your worries.
Author: Joe Conway
Reviewed-By: Michael Paquier
Reported by: Michael Paquier
Discussion: https://postgr.es/m/20190227074728.GA15710@paquier.xyz
Commit f831d4acc changed what pg_dump emits for some empty fields: they
were output as empty strings before, as NULL pointers afterwards.
That made old pg_restore crash on such files, which is unacceptable.
Return to the original representation by explicitly
setting those struct members to "" where needed; remove some no longer
needed checks for NULL input.
We can declutter the code a little by returning to NULLs when we next
update the archive version, so add a note to remind us later.
Discussion: https://postgr.es/m/20190225074539.az6j3u464cvsoxh6@depesz.com
Reported-by: hubert depesz lubaczewski
Author: Dmitry Dolgov
We have forboth() and forthree() macros that simplify iterating
through several parallel lists, but not everyplace that could
reasonably use those was doing so. Also invent forfour() and
forfive() macros to do the same for four or five parallel lists,
and use those where applicable.
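For reference, a sketch of forfour() following the existing
forboth()/forthree() pattern of the ListCell-based implementation:

    #define forfour(cell1, list1, cell2, list2, cell3, list3, cell4, list4) \
        for ((cell1) = list_head(list1), (cell2) = list_head(list2), \
             (cell3) = list_head(list3), (cell4) = list_head(list4); \
             (cell1) != NULL && (cell2) != NULL && \
             (cell3) != NULL && (cell4) != NULL; \
             (cell1) = lnext(cell1), (cell2) = lnext(cell2), \
             (cell3) = lnext(cell3), (cell4) = lnext(cell4))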
The immediate motivation for doing this is to reduce the number
of ad-hoc lnext() calls, to reduce the footprint of a WIP patch.
However, it seems like good cleanup and error-proofing anyway;
the places that were combining forthree() with a manually iterated
loop seem particularly illegible and bug-prone.
There was some speculation about restructuring related parsetree
representations to reduce the need for parallel list chasing of
this sort. Perhaps that's a win, or perhaps not, but in any case
it would be considerably more invasive than this patch; and it's
not particularly related to my immediate goal of improving the
List infrastructure. So I'll leave that question for another day.
Patch by me; thanks to David Rowley for review.
Discussion: https://postgr.es/m/11587.1550975080@sss.pgh.pa.us
There was a mix of old_slot/oldslot, new_slot/newslot. Since we've
changed everything from row to slot, we might as well take this
opportunity to clean this up.
Also update some more comments for the slot change.
ri_triggers.c spends a lot of space catering to a not-yet-implemented
MATCH PARTIAL option. An actual implementation would probably not use
the existing code structure anyway, so let's just simplify this for
now.
First, have ri_FetchConstraintInfo() check that riinfo->confmatchtype
is valid. Then we don't have to repeat that everywhere.
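A hedged sketch of the centralized check (placement inside
ri_FetchConstraintInfo(), simplified):

    switch (riinfo->confmatchtype)
    {
        case FKCONSTR_MATCH_FULL:
        case FKCONSTR_MATCH_PARTIAL:
        case FKCONSTR_MATCH_SIMPLE:
            break;              /* all known match types are valid */
        default:
            elog(ERROR, "unrecognized confmatchtype: %d",
                 riinfo->confmatchtype);
    }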
In the various referential action functions, we don't need to pay
attention to the match type at all right now, so remove all that code.
A future MATCH PARTIAL implementation would probably have some
conditions added to the present code, but it won't need an entirely
separate switch branch in each case.
In RI_FKey_fk_upd_check_required(), reorganize the code to make it
much simpler.
Reviewed-by: Corey Huinker <corey.huinker@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/0ccdd3e1-10b0-dd05-d8a7-183507c11eb1%402ndquadrant.com
Reflecting an updated parameter value requires a server restart, which
was not mentioned in the documentation or in postgresql.conf.sample.
Reported-by: Thomas Poty
Discussion: https://postgr.es/m/15659-0cd812f13027a2d8@postgresql.org
When using a libpq client linked with OpenSSL 1.0.1 or older to connect
to a backend linked with OpenSSL 1.0.2 or newer, the server would send
SCRAM-SHA-256-PLUS and SCRAM-SHA-256 as valid mechanisms for the SASL
exchange, and the client would choose SCRAM-SHA-256-PLUS even if it does
not support channel binding, leading to a confusing error. In this
case, what the client ought to do is switch to SCRAM-SHA-256 so that
the authentication can move on and succeed.
So for a SCRAM authentication over SSL, here are all the cases present
and how we deal with them using libpq:
1) Server supports channel binding, it sends SCRAM-SHA-256-PLUS and
SCRAM-SHA-256 as allowed mechanisms.
1-1) Client supports channel binding, chooses SCRAM-SHA-256-PLUS.
1-2) Client does not support channel binding, chooses SCRAM-SHA-256.
2) Server does not support channel binding, sends SCRAM-SHA-256 as
allowed mechanism.
2-1) Client supports channel binding but has no choice other than
SCRAM-SHA-256.
2-2) Client does not support channel binding, it chooses SCRAM-SHA-256.
In all these scenarios the connection should succeed. The one which
was handled incorrectly prior to this commit is 1-2), which caused the
connection attempt to fail because the client chose
SCRAM-SHA-256-PLUS over SCRAM-SHA-256.
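A hedged sketch of the corrected choice in libpq's SASL negotiation
(simplified from pg_SASL_init(); the ifdef reflects whether the SSL
library can produce the certificate hash needed for channel binding):

    if (strcmp(mechanism_buf.data, SCRAM_SHA_256_PLUS_NAME) == 0 &&
        conn->ssl_in_use)
    {
    #ifdef HAVE_PGTLS_GET_PEER_CERTIFICATE_HASH
        /* client can do channel binding: prefer the -PLUS variant */
        selected_mechanism = SCRAM_SHA_256_PLUS_NAME;
    #else
        /* no channel binding support: keep scanning so that plain
         * SCRAM-SHA-256 is picked instead of failing outright */
    #endif
    }
    else if (strcmp(mechanism_buf.data, SCRAM_SHA_256_NAME) == 0 &&
             !selected_mechanism)
        selected_mechanism = SCRAM_SHA_256_NAME;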
Reported-by: Hugh Ranalli
Diagnosed-by: Peter Eisentraut
Author: Michael Paquier
Reviewed-by: Peter Eisentraut
Discussion: https://postgr.es/m/CAAhbUMO89SqUk-5mMY+OapgWf-twF2NA5sCucbHEzMfGbvcepA@mail.gmail.com
Backpatch-through: 11
After ff11e7f4b9ae Tom's compiler warns about accessing a potentially
uninitialized rInfo. That's not actually possible, but it's
understandable that the compiler would get this wrong. Initialize it
to NULL too.
Reported-By: Tom Lane
Discussion: https://postgr.es/m/11199.1551285318@sss.pgh.pa.us
By default, the fallback_application_name for a physical walreceiver
is "walreceiver". This means that multiple standbys cannot be
distinguished easily on a primary, for example in pg_stat_activity or
synchronous_standby_names.
If cluster_name is set, use that for fallback_application_name in the
walreceiver. (If it's not set, it remains "walreceiver".) If someone
set cluster_name to identify their instance, we might as well use that
by default to identify the node remotely as well. It's still possible
to specify another application_name in primary_conninfo explicitly.
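A hedged sketch of the resulting choice at walreceiver connection
time (simplified from walreceiver.c):

    /* Use cluster_name to identify this node remotely, if set */
    appname = cluster_name[0] ? cluster_name : "walreceiver";
    wrconn = walrcv_connect(conninfo, false, appname, &err);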
Reviewed-by: Euler Taveira <euler@timbira.com.br>
Discussion: https://www.postgresql.org/message-id/flat/1257eaee-4874-e791-e83a-46720c72cac7@2ndquadrant.com
In preparation for abstracting table storage, convert trigger.c to
track tuples in slots, which also happens to make code calling
triggers simpler.
As the calling interface for triggers themselves is not changed in
this patch, HeapTuples still are extracted from the slot at that
time. But that's handled solely inside trigger.c, not visible to
callers. It's quite likely that we'll want to revise the external
trigger interface, but that's a separate large project.
As part of this work the slots used for old/new/return tuples are
moved from EState into ResultRelInfo, as different updated tables
might need different slots. The slots are now also created on demand,
which is good from an efficiency POV and also makes the modifying
code simpler.
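A hedged sketch of the on-demand pattern (field and helper names as
in this era's executor, simplified):

    /* Create the slot for trigger OLD tuples only when first used */
    if (relinfo->ri_TrigOldSlot == NULL)
        relinfo->ri_TrigOldSlot =
            ExecInitExtraTupleSlot(estate,
                                   RelationGetDescr(relinfo->ri_RelationDesc),
                                   &TTSOpsBufferHeapTuple);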
Author: Andres Freund, Amit Khandekar and Ashutosh Bapat
Discussion: https://postgr.es/m/20180703070645.wchpu5muyto5n647@alap3.anarazel.de
After the introduction of tuple table slots all table AMs need to
support returning the table oid of the tuple stored in a slot created
by said AM. It does not make sense to re-implement that in every AM,
therefore move handling of table OIDs into the TupleTableSlot
structure itself. It's possible that we, at a later date, might want
to get rid of HeapTupleData.t_tableOid entirely, but doing so before
the abstractions for table AMs are integrated turns out to be too
hard, so delay that for now.
Similarly, every AM needs to support the concept of a tuple
identifier (tid / item pointer) for its tuples. It's quite possible
that we'll generalize the exact form of a tid at a future point (to
allow for things like index organized tables), but for now many parts
of the code know about tids, so there's not much point in abstracting
tids away. Therefore also move the tid into the slot (rather than
providing an API to set/get the tid associated with the slot's tuple).
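Concretely (sketch), struct TupleTableSlot gains two generic identity
fields, and callers read them from the slot rather than from a
HeapTuple:

    /* in struct TupleTableSlot:
     *     ItemPointerData tts_tid;       stored tuple's item pointer
     *     Oid             tts_tableOid;  table OID of tuple's table
     */
    ItemPointerData tid = slot->tts_tid;
    Oid             reloid = slot->tts_tableOid;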
Once the table AM API includes inserting/updating/deleting tuples,
the responsibility for setting the correct tid after such an action
will move there. After that change, code doing such modifications
should not have to deal with HeapTuples directly anymore.
Author: Andres Freund, Haribabu Kommi and Ashutosh Bapat
Discussion: https://postgr.es/m/20180703070645.wchpu5muyto5n647@alap3.anarazel.de
That avoids having to care about the lifetime of the
HeapTupleHeaderData passed to ExecStore[Buffer]HeapTuple(). That
doesn't make a huge difference for a plain HeapTupleTableSlot, but for
BufferHeapTupleTableSlot it can be a significant advantage, avoiding
the need to materialize slots where it's inconvenient to provide a
HeapTupleData with appropriate lifetime to point to the on-disk tuple.
It's quite possible that we'll want to add support functions for
constructing HeapTuples using that embedded HeapTupleData, but for now
callers do so themselves.
Author: Andres Freund
Discussion: https://postgr.es/m/20180703070645.wchpu5muyto5n647@alap3.anarazel.de
This allows avoiding an unnecessary pin/unpin cycle when storing a
tuple from an already-pinned buffer into a slot, when the pin isn't
needed any further at the call site.
Only a single caller for now (to ensure coverage), but upcoming
patches will increase use of the new function.
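The contract difference, sketched:

    /* acquires its own pin on buffer; the caller's pin is untouched
     * and must still be released by the caller */
    ExecStoreBufferHeapTuple(tuple, slot, buffer);

    /* takes over the caller's existing pin, avoiding a pin/unpin
     * cycle when the caller no longer needs the pin itself */
    ExecStorePinnedBufferHeapTuple(tuple, slot, buffer);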
Author: Andres Freund
Discussion: https://postgr.es/m/20180703070645.wchpu5muyto5n647@alap3.anarazel.de
Previously, this function acquired locks using
find_all_inheritors(), which locks the children of each table that it
processes in ascending OID order, and which processes the inheritance
hierarchy as a whole in a breadth-first fashion. Now, it processes
the inheritance hierarchy in a depth-first fashion, and at each level
it proceeds in the order in which tables appear in the PartitionDesc.
If table inheritance rather than table partitioning is used, the old
order is preserved.
This change moves the locking of any given partition much closer to
the code that actually expands that partition. This seems essential
if we ever want to allow concurrent DDL to add or remove partitions,
because if the set of partitions can change, we must use the same data
to decide which partitions to lock as we do to decide which partitions
to expand; otherwise, we might expand a partition that we haven't
locked. It should hopefully also facilitate efforts to postpone
inheritance expansion or locking for performance reasons, because
there's really no way to postpone locking some partitions if
we're blindly locking them all using find_all_inheritors().
The only downside of this change which is known to me is that it
further deviates from the principle that we should always lock the
inheritance hierarchy in find_all_inheritors() order to avoid deadlock
risk. However, we've already crossed that bridge in commit
9eefba181f7782d27d85d7e94e6028371e7ab2d7 and there are further patches
pending that make similar changes, so this isn't really giving up
anything that we haven't surrendered already -- and it seems entirely
worth it, given the performance benefits some of those changes seem
likely to bring.
Patch by me; thanks to David Rowley for discussion of these issues.
Discussion: http://postgr.es/m/CAKJS1f_eEYVEq5tM8sm1k-HOwG0AyCPwX54XG9x4w0zy_N4Q_Q@mail.gmail.com
Discussion: http://postgr.es/m/CA+TgmoZUwPf_uanjF==gTGBMJrn8uCq52XYvAEorNkLrUdoawg@mail.gmail.com
_bt_getstackbuf() is called at exactly two points following commit
efada2b8e92 (one call site is concerned with page splits, while the
other is concerned with page deletion). The parent buffer returned by
_bt_getstackbuf() is write-locked in both cases. Remove the 'access'
argument and make _bt_getstackbuf() assume that callers require a
write-lock.
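The call-site change, sketched:

    /* before: callers specified the lock strength */
    pbuf = _bt_getstackbuf(rel, stack, BT_WRITE);

    /* after: a write-lock is implied at both remaining call sites */
    pbuf = _bt_getstackbuf(rel, stack);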
Commit efada2b8e92, which made the nbtree page deletion algorithm more
robust, removed _bt_getstackbuf() calls from _bt_pagedel(). It failed
to update a comment that referenced the earlier approach. Update the
comment to explain that the _bt_getstackbuf() page deletion call site
mirrors the only other remaining _bt_getstackbuf() call site, which is
reached during page splits.
Remove some unnecessary, legacy-looking use of the PROCEDURAL keyword
before LANGUAGE. We mostly don't use this anymore, so some of these
look a bit old.
There is still some use in pg_dump, which is harder to remove because
it's baked into the archive format, so I'm not touching that.
Discussion: https://www.postgresql.org/message-id/2330919b-62d9-29ac-8de3-58c024fdcb96@2ndquadrant.com
When preparing a transaction in two-phase commit, a dummy PGPROC entry
holding the GID used for the transaction is registered, which gets
released once COMMIT PREPARED is run. Prior releasing its shared memory
state, all the locks taken in the prepared transaction are released
using a dedicated set of callbacks (pgstat and multixact having similar
callbacks), which may cause the locks to be released before the GID is
set free.
Hence, there is a small window where lock conflicts could happen, for
example:
- Transaction A releases its locks, still holding its GID in shared
memory.
- Transaction B held a lock which conflicted with locks of transaction
A.
- Transaction B continues its processing, reusing the same GID as
transaction A.
- Transaction B fails because of a conflicting GID, already in use by
transaction A.
This commit changes the shared memory state release so that
post-commit callbacks and predicate lock cleanup happen consistently
with the shared memory state cleanup for the dummy PGPROC entry. The
race window is small and 2PC has had this issue from the start, so no
backpatch is done. On top of that, the fixes discussed involved ABI
breakages, which are not welcome in stable branches.
Reported-by: Oleksii Kliukin, Ildar Musin
Diagnosed-by: Oleksii Kliukin, Ildar Musin
Author: Michael Paquier
Reviewed-by: Masahiko Sawada, Oleksii Kliukin
Discussion: https://postgr.es/m/BF9B38A4-2BFF-46E8-BA87-A2D00A8047A6@hintbits.com
Commit 16be2fd1 introduced the flag DSA_ALLOC_NO_OOM to control whether
the DSA allocator would raise an error or return InvalidDsaPointer on
failure to allocate. One edge case was not handled correctly: if we
fail to allocate an internal "span" object for a large allocation, we
would always return InvalidDsaPointer regardless of the flag; a caller
not expecting that could then dereference a null pointer.
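A hedged sketch of the corrected edge case (simplified from
dsa_allocate_extended(); variable names as in dsa.c):

    /* Getting a span object can itself fail; honor the flag here
     * too, instead of unconditionally returning InvalidDsaPointer. */
    span_pointer = alloc_object(area, DSA_SCLASS_BLOCK_OF_SPANS);
    if (!DsaPointerIsValid(span_pointer))
    {
        /* give back the pages we had already obtained */
        FreePageManagerPut(segment_map->fpm, first_page, npages);
        if ((flags & DSA_ALLOC_NO_OOM) == 0)
            ereport(ERROR,
                    (errcode(ERRCODE_OUT_OF_MEMORY),
                     errmsg("out of memory")));
        return InvalidDsaPointer;
    }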
This is a plausible explanation for a one-off report of a segfault.
Remove a redundant pair of braces so that all three stanzas that handle
DSA_ALLOC_NO_OOM match in style, for visual consistency.
Also fix an inconsistency: if FreePageManagerGet() can't supply the
pages that our book-keeping says it should be able to supply, we
should always report a FATAL error. Previously we treated that as a
regular allocation failure in one code path, but as a FATAL condition
in another.
Back-patch to 10, where dsa.c landed.
Author: Thomas Munro
Reported-by: Jakub Glapa
Discussion: https://postgr.es/m/CAEepm=2oPqXxyWQ-1o60tpOLrwkw=VpgNXqqF1VN2EyO9zKGQw@mail.gmail.com