It typically is a "long", but it seems possible that on some platforms
it wouldn't be. In any case, this silences a compiler warning on
OpenBSD (cf buildfarm member curculio).
While at it, use snprintf not sprintf. This format string couldn't
possibly overrun the supplied buffer, but that doesn't seem like
a good reason not to use the safer style.
Oversight in commit f828654e1. Back-patch to 9.6 where that came in.
Previously, a prepared statement created via a Parse message could get
a parallel plan, but one created with a PREPARE statement could not.
This state of affairs was due to confusion on my (rhaas) part: I
erroneously believed that a CREATE TABLE .. AS EXECUTE statement could
only be performed with a prepared statement created via PREPARE, but in
fact one created via a Parse message works just as well. Therefore, it
makes no sense to allow parallel query in one case but not the other.
To fix, allow parallel query with all prepared statements, but run
the parallel plan serially (i.e. without workers) in the case of
CREATE TABLE .. AS EXECUTE. Also, document this.
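For illustration, both kinds of prepared statement now get the same
treatment (hypothetical table and statement names):

    PREPARE q AS SELECT count(*) FROM big_table;
    EXECUTE q;                    -- may run a parallel plan with workers
    CREATE TABLE t AS EXECUTE q;  -- parallel plan, executed without workers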
Amit Kapila and Tobias Bussmann, plus an extra sentence of
documentation by me.
This reverts commit 4e01ecae98275298c680c92fdba62daf603dc98e.
Per Tom Lane, changing the definition of StringInfoData amounts to an
ABI break, which is unacceptable in back branches.
The previous code worked OK as long as a Gather node was never
rescanned, or if it was rescanned, as long as it got at least as
many workers on rescan as it had originally. But if the number
of workers ever decreased on a rescan, then it could crash.
Andreas Seltenreich
Previously gin_desc() displayed incorrect output "unknown action 0"
for XLOG_GIN_INSERT and XLOG_GIN_VACUUM_DATA_LEAF_PAGE records with
valid actions. The cause of this problem was that gin_desc() wrongly
used XLogRecGetData() to extract data from those records.
Since they were registered by XLogRegisterBufData(), gin_desc() should
have used XLogRecGetBlockData(), instead, like gin_redo().
There were also other differences between gin_desc() and gin_redo()
in how they handled XLOG_GIN_INSERT records.
This commit fixes gin_desc() routine so that it treats those records
in the same way as gin_redo().
Back-patch to 9.5, where the WAL record format was revamped and
XLogRegisterBufData() was added.
Reported-By: Andres Freund
Reviewed-By: Tom Lane
Discussion: <20160509194645.7lewnpw647zegx2m@alap3.anarazel.de>
transformOnConflictClause incremented p_next_resno while generating the
phony targetlist for the EXCLUDED pseudo-rel. Then that field got
incremented some more during transformTargetList, possibly leading to
free_parsestate concluding that we'd overrun the allowed length of a tlist,
as reported by Justin Pryzby.
We could fix this by resetting p_next_resno to 1 after using it for the
EXCLUDED pseudo-rel tlist, but it seems easier and less coupled to other
places if we just don't use that field at all in this loop. (Note that
this doesn't change anything about the resnos that end up appearing in
the main target list, because those are all replaced with target-column
numbers by updateTargetListEntry.)
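For context, EXCLUDED is the pseudo-relation referenced in statements
such as this (hypothetical table):

    INSERT INTO tab (k, v) VALUES (1, 'new')
    ON CONFLICT (k) DO UPDATE SET v = EXCLUDED.v;

With a wide enough target table, counting its columns twice in
p_next_resno could push the counter past the maximum target-list length.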
In passing, fix incorrect type OID assigned to the whole-row Var for
"EXCLUDED.*" (somehow this escaped having any bad consequences so far,
but it's certainly wrong); remove useless assignment to var->location;
pstrdup the column names in case of a relcache flush; and improve
nearby comments.
Back-patch to 9.5 where ON CONFLICT was introduced.
Report: https://postgr.es/m/20161204163237.GA8030@telsasoft.com
This has no effect in the most conventional case, where no relevant DLL
uses a debug build. For an example where it does matter, given a debug
build of MIT Kerberos, the krb_server_keyfile parameter usually had no
effect. Since nobody wants a Heisenbug, back-patch to 9.2 (all
supported versions).
Christian Ullrich, reviewed by Michael Paquier.
In accordance with its own documentation, invoke CloseHandle() only when
directed in the documentation for the function that furnished the
handle. GetModuleHandle() does not so direct. We have been issuing
this call only in the rare event that a CRT DLL contains no "_putenv"
symbol, so lack of bug reports is uninformative. Back-patch to 9.2 (all
supported versions).
Christian Ullrich, reviewed by Michael Paquier.
Replace use of plain 0 as a null pointer constant. In comments, update
terminology and lessen redundancy. Back-patch to 9.2 (all supported
versions) for the convenience of back-patching the next two commits.
Christian Ullrich and Noah Misch, reviewed (in earlier versions) by
Michael Paquier.
Our documentation states that our maximum field size is 1 GB, and that
our maximum row size is 1.6 TB. However, while this might be attainable
in theory with enough contortions, it is not workable in practice; for
starters, pg_dump fails to dump tables containing rows larger than 1 GB,
even if individual columns are well below the limit; and even if one
does manage to manufacture a dump file containing a row that large, the
server refuses to load it anyway.
This commit enables dumping and reloading of such tuples, provided two
conditions are met:
1. no single column is larger than 1 GB (in output size -- for bytea
this includes the formatting overhead)
2. the whole row is not larger than 2 GB
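For example, a row like the following (hypothetical table) becomes
dumpable and reloadable, since each column stays under 1 GB and the
whole row stays under 2 GB:

    CREATE TABLE wide (a text, b text);
    INSERT INTO wide
      SELECT repeat('x', 900000000), repeat('y', 900000000);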
There are three related changes to enable this:
a. StringInfo's API now has two additional functions that allow creating
a string that grows beyond the typical 1 GB limit (a "long" string).
ABI compatibility is maintained. We still limit these strings to 2 GB,
though, for reasons explained below.
b. COPY now uses long StringInfos, so that pg_dump doesn't choke
trying to emit rows longer than 1GB.
c. heap_form_tuple now uses the MCXT_ALLOW_HUGE flag in its allocation
for the input tuple, which means that large tuples are accepted on
input. Note that at this point we do not apply any further limit to the
input tuple size.
The main reason to limit to 2 GB is that the FE/BE protocol uses 32 bit
length words to describe each row; and because the documentation is
ambiguous on its signedness and libpq does consider it signed, we cannot
use the highest-order bit. Additionally, the StringInfo API uses "int"
(which is 4 bytes wide on most platforms) in many places, so we'd need
to change that API too in order to go further, which would have lots
of fallout.
Backpatch to 9.5, which is the oldest that has
MemoryContextAllocExtended, a necessary piece of infrastructure. We
could apply it to 9.4 with very minimal additional effort, but any further
than that would require backpatching "huge" allocations too.
This is the largest set of changes we could find that can be
back-patched without breaking compatibility with existing systems.
Fixing a bigger set of problems (for example, dumping tuples bigger than
2GB, or dumping fields bigger than 1GB) would require changing the FE/BE
protocol and/or changing the StringInfo API in an ABI-incompatible way,
neither of which would be back-patchable.
Authors: Daniel Vérité, Álvaro Herrera
Reviewed by: Tomas Vondra
Discussion: https://postgr.es/m/20160229183023.GA286012@alvherre.pgsql
Per bug #14441 from Mark Pether, the documentation could be misread,
mainly because some of the examples failed to show what happens with
a multicharacter "characters to trim" string. Also, while the text
description in most of these entries was fairly clear that the
"characters" argument is a set of characters not a substring to match,
some of them used variant wording that was a bit less clear.
trim() itself suffered from both deficiencies and was thus especially
easy to misinterpret.
Also fix failure to explain which of LEADING/TRAILING/BOTH is the
default.
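For instance, the examples now make it plain that the "characters"
argument is a set, and that BOTH is assumed when no trim direction is
given:

    SELECT trim(both 'xyz' from 'yxTomxx');  -- 'Tom', not a substring match
    SELECT trim('xx' from 'xxTomxx');        -- 'Tom'; BOTH is the default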
Discussion: https://postgr.es/m/20161130011710.6539.53657@wrigleys.postgresql.org
consider_parallel_nestloop passed the wrong jointype down to its
subroutines for JOIN_UNIQUE_INNER cases (it should pass JOIN_INNER), and it
thought that it could pass paths other than innerrel->cheapest_total_path
to create_unique_path, which create_unique_path is not on board with.
These bugs would lead to assertion failures or other errors, suggesting
that this code path hasn't been tested much.
hash_inner_and_outer's code for parallel join effectively treated both
JOIN_UNIQUE_OUTER and JOIN_UNIQUE_INNER the same as JOIN_INNER (for
different reasons :-(), leading to incorrect plans that treated a semijoin
as if it were a plain join.
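Queries of roughly this shape can exercise those join types, since a
semijoin may be implemented by unique-ifying one input and then
performing a plain join (hypothetical tables):

    SELECT * FROM big b
    WHERE EXISTS (SELECT 1 FROM other o WHERE o.id = b.id);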
Michael Day submitted a test case demonstrating that hash_inner_and_outer
failed for JOIN_UNIQUE_OUTER, and I found the other cases through code
review.
Report: https://postgr.es/m/D0E8A029-D1AC-42E8-979A-5DE4A77E4413@rcmail.com
The documentation around the -b/--blobs option to pg_dump seemed to
imply that it might be possible to add blobs to a "schema-only" dump or
similar. Clarify that blobs are data and therefore will only be
included in dumps where data is being included, even when -b is used to
request blobs be included.
The -b option has been around since before 9.2, so back-patch to all
supported branches.
Discussion: https://postgr.es/m/20161119173316.GA13284@tamriel.snowman.net
An example in the psql documentation showed a field name different from
what the command actually produces.
Pointed out by Fabien COELHO
Back-patch to 9.6 where the example was added.
Discussion: https://postgr.es/m/alpine.DEB.2.20.1611291349400.19314@lancre
Commit 08dd23cec introduced an exception to the rule that extension member
objects can only be dropped as part of dropping the whole extension,
intending to allow such drops while running the extension's own creation or
update scripts. However, the exception was only applied at the outermost
recursion level, because it was modeled on a pre-existing check to ignore
dependencies on objects listed in pendingObjects. Bug #14434 from Philippe
Beaudoin shows that this is inadequate: in some cases we can reach an
extension member object by recursion from another one. (The bug concerns
the serial-sequence case; I'm not sure if there are other cases, but there
might well be.)
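The serial-sequence case can arise within an extension script like this
(hypothetical objects): the DROP must be allowed to recurse from the
table to the sequence owned by its serial column, which happens below
the outermost recursion level:

    CREATE TABLE ext_table (id serial PRIMARY KEY);
    DROP TABLE ext_table;  -- recursion reaches ext_table_id_seq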
To fix, revert 08dd23cec's changes to findDependentObjects() and instead
apply the creating_extension exception regardless of stack level.
Having seen this example, I'm a bit suspicious that the pendingObjects
logic is also wrong and such cases should likewise be allowed at any
recursion level. However, changing that would interact in subtle ways
with the recursion logic (at least it would need to be moved to after the
recursing-from check). Given that the code's been like that a long time,
I'll refrain from touching it without a clear example showing it's wrong.
Back-patch to all active branches. In HEAD and 9.6, where suitable
test infrastructure exists, add a regression test case based on the
bug report.
Report: <20161125151448.6529.33039@wrigleys.postgresql.org>
Discussion: <13224.1480177514@sss.pgh.pa.us>
Instead of confusingly stating platform-dependent defaults for these
parameters in the comments in postgresql.conf.sample (with the main
entry being a lie on Linux), teach initdb to install the correct
platform-dependent value in postgresql.conf, similarly to the way
we handle other platform-dependent defaults. This won't do anything
for existing 9.6 installations, but since it's effectively only a
documentation improvement, that seems OK.
Since this requires initdb to have access to the default values,
move the #define's for those to pg_config_manual.h; the original
placement in bufmgr.h is unworkable because that file can't be
included by frontend programs.
Adjust the default value for wal_writer_flush_after so that it is 1MB
regardless of XLOG_BLCKSZ, conforming to what is stated in both the
SGML docs and postgresql.conf. (We could alternatively make it scale
with XLOG_BLCKSZ, but I'm not sure I see the point.)
Copy-edit related SGML documentation.
Fabien Coelho and Tom Lane, per a gripe from Tomas Vondra.
Discussion: <30ebc6e3-8358-09cf-44a8-578252938424@2ndquadrant.com>
Andreas Seltenreich found another case where we were being too optimistic
about allowing a plan to be considered parallelizable despite it containing
initPlans. It seems like the real issue here is that if we know we are
going to tack initPlans onto the topmost Plan node for a subquery, we
had better mark that subquery's result Paths as not-parallel-safe. That
fixes this problem and allows reversion of a kluge (added in commit
7b67a0a49 and extended in f24cf960d) to not trust the parallel_safe flag
at top level.
Discussion: <874m2w4k5d.fsf@ex.ansel.ydns.eu>
When dropping a foreign key constraint with ALTER TABLE DROP CONSTRAINT,
we refuse the drop if there are any pending trigger events on the named
table; this ensures that we won't remove the pg_trigger row that will be
consulted by those events. But we should make the same check for the
referenced relation, else we might remove a due-to-be-referenced pg_trigger
row for that relation too, resulting in "could not find trigger NNN" or
"relation NNN has no triggers" errors at commit. Per bug #14431 from
Benjie Gillam. Back-patch to all supported branches.
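A sequence along these lines (hypothetical names, assuming a deferred
foreign-key constraint from fk_table to pk_table) shows the hazard:

    BEGIN;
    DELETE FROM pk_table WHERE id = 1;  -- queues deferred RI trigger events
    ALTER TABLE fk_table DROP CONSTRAINT fk_table_id_fkey;
    COMMIT;  -- previously failed with "could not find trigger NNN"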
Report: <20161124114911.6530.31200@wrigleys.postgresql.org>
Previously, requesting commit timestamp for transactions
FrozenTransactionId and BootstrapTransactionId resulted in an error.
But since those values can validly appear in committed tuples' Xmin,
this behavior is unhelpful and error prone: each caller would have to
special-case those values before requesting timestamp data for an Xid.
We already have a perfectly good interface for returning "the Xid you
requested is too old for us to have commit TS data for it", so let's use
that instead.
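For example, with track_commit_timestamp enabled, a query like this no
longer has to special-case frozen rows:

    SELECT pg_xact_commit_timestamp(xmin) FROM some_table;
    -- now returns NULL for frozen/bootstrap Xids instead of erroring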
Backpatch to 9.5, where commit timestamps appeared.
Author: Craig Ringer
Discussion: https://www.postgresql.org/message-id/CAMsr+YFM5Q=+ry3mKvWEqRTxrB0iU3qUSRnS28nz6FJYtBwhJg@mail.gmail.com
When rebuilding an existing index, ALTER TABLE correctly kept the
physical file in the same tablespace, but it messed up the pg_class
entry if the index had been in the database's default tablespace
and "default_tablespace" was set to some non-default tablespace.
This led to an inaccessible index.
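A sketch of the failure scenario (hypothetical names):

    SET default_tablespace = 'ts1';
    ALTER TABLE t ALTER COLUMN c TYPE bigint;  -- rebuilds t's indexes
    -- an index actually kept in the database's default tablespace got a
    -- pg_class entry claiming ts1, leaving it inaccessible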
Fix by changing pg_get_indexdef_string() to always include a tablespace
clause, whether or not the index is in the default tablespace. The
previous behavior was installed in commit 537e92e41, and I think it just
wasn't thought through very clearly; certainly the possible effect of
default_tablespace wasn't considered. There's some risk in changing the
behavior of this function, but there are no other call sites in the core
code. Even if it's being used by some third party extension, it's fairly
hard to envision a usage that is okay with a tablespace clause being
appended some of the time but can't handle it being appended all the time.
Back-patch to all supported versions.
Code fix by me, investigation and test cases by Michael Paquier.
Discussion: <1479294998857-5930602.post@n3.nabble.com>
Before commit 906bfcad7, we were not actually processing the righthand
side of a multiple-column assignment in UPDATE as a row constructor:
it was just a parenthesized list of expressions. Call it that rather
than risking confusion by people who would expect the documented behaviors
of row constructors to apply.
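That is, in a command like the following, the parenthesized list is
(before 906bfcad7) just a list of independent expressions, without the
documented behaviors of a true row constructor:

    UPDATE tab SET (a, b) = (1, a + 1);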
Back-patch to 9.5; before that, the text correctly described the construct
as a "list of independent expressions".
Discussion: <16288.1479610770@sss.pgh.pa.us>
Create a section specifically for the syntactic rules around whole-row
variable usage, such as expansion of "foo.*". This was previously
documented only haphazardly, with some critical info buried in
unexpected places like xfunc-sql-composite-functions. Per repeated
questions in different mailing lists.
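The rules cover usages such as these (trivial examples):

    SELECT t.* FROM t;               -- expands to all columns of t
    SELECT row_to_json(t.*) FROM t;  -- treated as one composite value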
Discussion: <16288.1479610770@sss.pgh.pa.us>
There are assorted references to RETURNING in Part II, but nothing
that would qualify as an explanation of the feature, which seems
like an oversight considering how useful it is. Add something.
Noted while looking for a place to point a cross-reference to ...
A little COLLATE "C" goes a long way.
Pavel Stehule, per suggestion from Craig Ringer
Discussion: <CAFj8pRA8nJZcozgxN=RMSqMmKuHVOkcGAAKPKdFeiMWGDSUDLA@mail.gmail.com>
The code was intentionally not very careful about leaking strdup'd
strings in case of an error. That was forgivable probably, but it
also failed to notice strdup() failures, which could lead to subsequent
null-pointer-dereference crashes, since many callers unsurprisingly
didn't check for null pointers in the struct lconv fields. An even
worse problem is that it could throw error while we were setlocale'd
to a non-C locale, causing unwanted behavior in subsequent libc calls.
Rewrite to ensure that we cannot throw elog(ERROR) until after we've
restored the previous locale settings, or at least attempted to.
(I'm sorely tempted to make restore failure be a FATAL error, but
will refrain for the moment.) Having done that, it's not much more
work to ensure that we clean up strdup'd storage on the way out, too.
This code is substantially the same in all supported branches, so
back-patch all the way.
Michael Paquier and Tom Lane
Discussion: <CAB7nPqRMbGqa_mesopcn4MPyTs34eqtVEK7ELYxvvV=oqS00YA@mail.gmail.com>
We mustn't force parallel mode if the query has any subplans, since
ExecSerializePlan doesn't transmit them to workers. Testing
top_plan->initPlan is inadequate because (1) there might be initPlans
attached to lower plan nodes, and (2) non-initPlan subplans don't
work either. There's certainly room for improvement in those
restrictions, but for the moment that's what we've got.
Amit Kapila, per report from Andreas Seltenreich
Discussion: <8737im6pmh.fsf@credativ.de>
Because we use transformTargetList() for UPDATE as well as SELECT
tlists, the code accidentally tried to expand a "*" reference into
several columns. This is nonsensical, because the UPDATE syntax
provides exactly one target column to put the value into. The
immediate result was that transformUpdateTargetList() got confused
and reported "UPDATE target count mismatch --- internal error".
It seems better to treat such a reference as a plain whole-row
variable, as it would be in other contexts. (This could produce
useful results when the target column is of composite type.)
Fix by tweaking transformTargetList() to perform *-expansion only
conditionally, depending on its exprKind parameter.
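With that, a command of roughly this shape (hypothetical tables, with c
of composite type) no longer draws the internal error and instead
assigns the whole row:

    UPDATE t SET c = s.* FROM s WHERE s.id = t.id;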
Back-patch to 9.3. The problem exists further back, but a fix would be
much more invasive before that, because transformTargetList() wasn't
told what kind of list it was working on. Doesn't seem worth the
trouble given the lack of field reports. (I only noticed it because
I was checking the code while trying to improve the documentation about
how we handle "foo.*".)
Discussion: <4308.1479595330@sss.pgh.pa.us>
The serialization code dumped core for a string-valued GUC whose value
is NULL, which is a legal state. The infrastructure isn't capable of
transmitting that state exactly, but fortunately, transmitting an empty
string instead should be close enough (compare, eg, commit e45e990e4).
The code potentially underestimated the space required to format a
real-valued variable, both because it made an unwarranted assumption that
%g output would never be longer than %e output, and because it didn't count
right even for %e format. In practice this would pretty much always be
masked by overestimates for other variables, but it's still wrong.
Also fix boundary-case error in read_gucstate, incorrect handling of the
case where guc_sourcefile is non-NULL but zero length (not clear that can
happen, but if it did, this code would get totally confused), and
confusingly useless check for a NULL result from read_gucstate.
Andreas Seltenreich discovered the core dump; other issues noted while
reading nearby code. Back-patch to 9.5 where this code was introduced.
Michael Paquier and Tom Lane
Discussion: <871sy78wno.fsf@credativ.de>
Teach it not to complain if the dropStmt attached to an archive entry
is actually spelled CREATE OR REPLACE VIEW, since that will happen due to
an upcoming bug fix. Also, if it doesn't recognize a dropStmt, have it
print a WARNING and then emit the dropStmt unmodified. That seems like a
much saner behavior than Assert'ing or dumping core due to a null-pointer
dereference, which is what would happen before :-(.
Back-patch to 9.4 where this option was introduced.
Discussion: <19092.1479325184@sss.pgh.pa.us>
Replay of XLOG_BTREE_VACUUM during Hot Standby was previously thought to
require complex interlocking that matched the requirements on the
master. This required an O(N) operation that became a significant
problem with large indexes, causing replication delays of seconds or in
some cases minutes while the XLOG_BTREE_VACUUM was replayed.
This commit skips the "pin scan" that was previously required, by
observing in detail when and how it is safe to do so, with full
documentation. The pin scan is skipped only in replay; the VACUUM code
path on master is not touched here.
No tests included. Manual tests, using an additional patch to view WAL
records and their timing, have shown that the change in WAL records and
their handling successfully reduces replication delay.
This is a back-patch of commits 687f2cd7a015, 3e4b7d87988f, b60284261375
by Simon Riggs, to branches 9.4 and 9.5. No further backpatch is
possible because this depends on catalog scans being MVCC. I (Álvaro)
additionally fixed a slight problem in the README, which explains why
this touches the 9.6 and master branches.
On Windows, libc will mask \r\n line endings for us, since we read the
password file in text mode. But that doesn't happen on Unix. People
who share password files across both systems might have \r\n line endings
in a file they use on Unix, so as a convenience, ignore trailing \r.
Per gripe from Josh Berkus.
In passing, put the existing check for empty line somewhere where it's
actually useful, ie after stripping the newline not before.
Vik Fearing, adjusted a bit by me
Discussion: <0de37763-5843-b2cc-855e-5d0e5df25807@agliodbs.com>
The CatalogSnapshot was not plugged into SnapshotResetXmin()'s accounting
for whether MyPgXact->xmin could be cleared or advanced. In normal
transactions this was masked by the fact that the transaction snapshot
would be older, but during backend startup and certain utility commands
it was possible to re-use the CatalogSnapshot after MyPgXact->xmin had
been cleared, meaning that recently-deleted rows could be pruned even
though this snapshot could still see them, causing unexpected catalog
lookup failures. This effect appears to be the explanation for a recent
failure on buildfarm member piculet.
To fix, add the CatalogSnapshot to the RegisteredSnapshots heap whenever
it is valid.
In the previous logic, it was possible for the CatalogSnapshot to remain
valid across waits for client input, but with this change that would mean
it delays advance of global xmin in cases where it did not before. To
avoid possibly causing new table-bloat problems with clients that sit idle
for long intervals, add code to invalidate the CatalogSnapshot before
waiting for client input. (When the backend is busy, it's unlikely that
the CatalogSnapshot would be the oldest snap for very long, so we don't
worry about forcing early invalidation of it otherwise.)
In passing, remove the CatalogSnapshotStale flag in favor of using
"CatalogSnapshot != NULL" to represent validity, as we do for the other
special snapshots in snapmgr.c. And improve some obsolete comments.
No regression test because I don't know a deterministic way to cause this
failure. But the stress test shown in the original discussion provokes
"cache lookup failed for relation 1255" within a few dozen seconds for me.
Back-patch to 9.4 where MVCC catalog scans were introduced. (Note: it's
quite easy to produce similar failures with the same test case in branches
before 9.4. But MVCC catalog scans were supposed to fix that.)
Discussion: <16447.1478818294@sss.pgh.pa.us>
Commit 3c4cf080879b should have removed SET TABLESPACE from the synopsis
of ALTER MATERIALIZE VIEW as a possible "action" when it added a
separate line for it in the main command listing, but failed to.
Repair.
Backpatch to 9.4, like the aforementioned commit.
The documentation for ts_headline() recommends using a sub-select to
avoid extra evaluations of ts_headline() in a query with ORDER BY+LIMIT.
Since commit 9118d03a8 this contortionism is unnecessary, so remove the
recommendation. Noted by Oleg Bartunov.
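The formerly recommended pattern looked like this (following the old
docs' example); a plain ORDER BY/LIMIT query now behaves equivalently:

    SELECT id, ts_headline(body, q), rank
    FROM (SELECT id, body, q, ts_rank(ti, q) AS rank
          FROM apod, to_tsquery('stars') q
          WHERE ti @@ q
          ORDER BY rank DESC
          LIMIT 10) AS foo;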
Discussion: <CAF4Au4w6rrH_j1bvVhzpOsRiHCog7sGJ3LSX0tY8ZdwhHT88LQ@mail.gmail.com>
Commit 23a27b039 widened these from uint32 to uint64, but I overlooked
that the documentation explicitly showed them as uint32. Per report
from Vicky Vergara.
Report: <20161111135422.8761.36733@wrigleys.postgresql.org>
The reloptions stuff allows this option to be set on a matview.
While it's questionable whether that is useful or was really intended,
it does work, and we shouldn't change that in minor releases. Commit
e3e66d8a9 disabled the option since I didn't realize that it was
possible for it to be set on a matview. Tweak the test to re-allow it.
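Concretely, this must keep working in minor releases (hypothetical
matview):

    ALTER MATERIALIZED VIEW mv SET (user_catalog_table = true);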
Discussion: <19749.1478711862@sss.pgh.pa.us>
The plan generated for sorted partial aggregation with "GROUP BY constant"
included a Sort node with no sort keys, which the executor does not like.
Per report from Steve Randall. I'd add a regression test case if I could
think of a compact one, but it doesn't seem worth expending lots of cycles
on.
Report: <CABVd52UAdGXpg_rCk46egpNKYdXOzCjuJ1zG26E2xBe_8bj+Fg@mail.gmail.com>
In each case, absence of a trailing newline would itself constitute a
PostgreSQL bug. Therefore, this slightly enhances the changed tests.
This works around a bug that last appeared in Perl 5.8.8, fixing
src/test/modules/test_pg_dump when run against that version. Commit
e7293e3271bf618eeb2d4779a15fc516a69fe463 worked around the bug, but the
subsequent addition of test_pg_dump introduced affected code. As that
commit had shown, slight increases in pattern complexity can suppress
the bug. This commit edits qr/foo$/m patterns too complex to encounter
the bug today, for style consistency and robustness against unrelated
pattern changes. Back-patch to 9.6, where test_pg_dump was introduced.
As of this writing, a fresh MSYS installation includes an affected Perl
5.8.8. The Perl 5.8.8 in Red Hat Enterprise Linux 5.11 carries a patch
that renders it unaffected, but the Perl 5.8.5 of Red Hat Enterprise
Linux 4.4 is affected.
We really ought to make StdRdOptions and the other decoded forms of
reloptions self-identifying, but for the moment, assume that only plain
relations could possibly be user_catalog_tables. Fixes problem with bogus
"ON CONFLICT is not supported on table ... used as a catalog table" error
when target is a view with cascade option.
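The failing case looked roughly like this (hypothetical objects); the
view's reloptions were misread as if they were a plain relation's,
tripping the catalog-table test:

    CREATE TABLE t (a int PRIMARY KEY);
    CREATE VIEW v AS SELECT * FROM t WITH CASCADED CHECK OPTION;
    INSERT INTO v VALUES (1) ON CONFLICT DO NOTHING;  -- bogus error before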
Discussion: <26681.1477940227@sss.pgh.pa.us>
This was already fixed in HEAD as part of 6ad8ac60 but was not
backpatched.
Also change the way pg_xlog is handled to be the same as the other
directories.
Patch from me with pg_xlog addition from Michael Paquier, test updates
from David Steele.
For a very long time, pltcl's spi_exec and spi_execp commands have had
a behavior of storing the current row number as an element of output
arrays, but this was never documented. Fix that.
For an equally long time, pltcl_trigger_handler had a behavior of silently
ignoring ".tupno" as an output column name, evidently so that the result
of spi_exec could be used directly as a trigger result tuple. Not sure
how useful that really is, but in any case it's bad that it would break
attempts to use ".tupno" as an actual column name. We can fix it by not
checking for ".tupno" until after we check for a column name match. This
comports with the effective behavior of spi_exec[p] that ".tupno" is only
magic when you don't have an actual column named that.
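A sketch of the documented behavior (hypothetical function):

    CREATE FUNCTION tupno_demo() RETURNS integer LANGUAGE pltcl AS $$
        spi_exec -array row "SELECT relname FROM pg_class LIMIT 3" {
            # .tupno is the row counter spi_exec stores in the array,
            # unless the query itself returns a column named ".tupno"
            elog NOTICE "row $row(.tupno): $row(relname)"
        }
        return 0
    $$;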
In passing, wordsmith the description of returning modified tuples from
a pltcl trigger.
Noted while working on Jim Nasby's patch to support composite results
from pltcl. The inability to return trigger tuples using ".tupno" as
a column name is a bug, so back-patch to all supported branches.