Views should not have any pg_attribute entries for system columns.
However, we forgot to remove such entries when converting a table to a
view. This could lead to crashes later on, if someone attempted to
reference such a column, as reported by Kohei KaiGai.
This problem is corrected properly in HEAD (by removing the pg_attribute
entries during conversion), but in the back branches we need to defend
against existing mis-converted views. This fix costs us an extra syscache
lookup per system column reference, which is annoying but probably not
really measurable in the big scheme of things.
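For illustration (view name hypothetical), a mis-converted view could crash on
a query as simple as:
    SELECT ctid FROM misconverted_view;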
Parse analysis neglected to cover the case of a WITH clause attached to an
intermediate-level set operation; it only handled WITH at the top level
or WITH attached to a leaf-level SELECT. Per report from Adam Mackler.
In HEAD, I rearranged the order of SelectStmt's fields to put withClause
with the other fields that can appear on non-leaf SelectStmts. In back
branches, leave it alone to avoid a possible ABI break for third-party
code.
Back-patch to 8.4 where WITH support was added.
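A minimal sketch of the previously-mishandled shape, with WITH attached to a
parenthesized set operation that is neither the top level nor a leaf SELECT:
    SELECT 1
    UNION
    (WITH w AS (SELECT 2 AS x)
     SELECT x FROM w UNION SELECT 3);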
If a CHECK constraint or index definition contained a whole-row Var (that
is, "table.*"), an attempt to copy that definition via CREATE TABLE LIKE or
table inheritance produced incorrect results: the copied Var still claimed
to have the rowtype of the source table, rather than the created table.
For the LIKE case, it seems reasonable to just throw error for this
situation, since the point of LIKE is that the new table is not permanently
coupled to the old, so there's no reason to assume its rowtype will stay
compatible. In the inheritance case, we should ideally allow such
constraints, but doing so will require nontrivial refactoring of CREATE
TABLE processing (because we'd need to know the OID of the new table's
rowtype before we adjust inherited CHECK constraints). In view of the lack
of previous complaints, that doesn't seem worth the risk in a back-patched
bug fix, so just make it throw error for the inheritance case as well.
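For instance (names hypothetical, and assuming a function accepting the
table's rowtype exists), copying a whole-row CHECK constraint now fails up
front:
    CREATE TABLE t1 (a int, b int, CHECK (row_is_valid(t1.*)));
    CREATE TABLE t2 (LIKE t1 INCLUDING CONSTRAINTS);  -- now throws error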
Along the way, replace change_varattnos_of_a_node() with a more robust
function map_variable_attnos(), which is capable of being extended to
handle insertion of ConvertRowtypeExpr whenever we get around to fixing
the inheritance case nicely, and in the meantime it returns a failure
indication to the caller so that a helpful message with some context can be
thrown. Also, this code will do the right thing with subselects (if we
ever allow them in CHECK or indexes), and it range-checks varattnos before
using them to index into the map array.
Per report from Sergey Konoplev. Back-patch to all supported branches.
The EvalPlanQual machinery assumes that whole-row Vars generated for the
outputs of non-table RTEs will be of composite types. However, for the
case where the RTE is a function call returning a scalar type, we were
doing the wrong thing, as a result of sharing code with a parser case
where the function's scalar output is wanted. (Or at least, that's what
that case has done historically; it does seem a bit inconsistent.)
To fix, extend makeWholeRowVar's API so that it can support both use-cases.
This fixes Belinda Cussen's report of crashes during concurrent execution
of UPDATEs involving joins to the result of UNNEST() --- in READ COMMITTED
mode, we'd run the EvalPlanQual machinery after a conflicting row update
commits, and it was expecting to get a HeapTuple not a scalar datum from
the "wholerowN" variable referencing the function RTE.
Back-patch to 9.0 where the current EvalPlanQual implementation appeared.
In 9.1 and up, this patch also fixes failure to attach the correct
collation to the Var generated for a scalar-result case. An example:
regression=# select upper(x.*) from textcat('ab', 'cd') x;
ERROR: could not determine which collation to use for upper() function
transform_null_equals is only supposed to affect "foo = NULL" expressions
given directly by the user, not the internal "foo = NULL" expression
generated from CASE-WHEN.
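For example, with the GUC turned on, the comparison generated internally by
CASE-WHEN must keep ordinary equality semantics, so this returns 'ne', not
'eq':
    SET transform_null_equals = on;
    SELECT CASE NULL::int WHEN NULL THEN 'eq' ELSE 'ne' END;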
This fixes bug #6242, reported by Sergey. Backpatch to all supported
branches.
We were trying to make that strictly an internal implementation detail,
but it turns out that it's exposed anyway when dumping a view defined
like
CREATE VIEW test_view AS VALUES (1), (2), (3) ORDER BY 1;
This comes out as
CREATE VIEW ... ORDER BY "*VALUES*".column1;
which fails to parse when reloading the dump.
Hacking ruleutils.c to suppress the column qualification looks like it'd
be a risky business, so instead promote the RTE alias to full-fledged
usability.
Per bug #6049 from Dylan Adams. Back-patch to all supported branches.
as a variable or column name, and it's not reserved in recent versions of
the SQL spec either. This became particularly annoying in 9.0; before that,
PL/pgSQL replaced variable names in queries with parameter markers, so
it was possible to use OFF and many other backend parser keywords as
variable names. Because of that, backpatch to 9.0.
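A hypothetical sketch of the kind of code this unbreaks:
    CREATE FUNCTION uses_off() RETURNS int AS $$
    DECLARE
      off int := 0;
    BEGIN
      RETURN (SELECT off + 1);
    END $$ LANGUAGE plpgsql;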
A couple of places in the planner need to generate whole-row Vars, and were
cutting corners by setting vartype = RECORDOID in the Vars, even in cases
where there's an identifiable named composite type for the RTE being
referenced. While we mostly got away with this, it failed when there was
also a parser-generated whole-row reference to the same RTE, because the
two Vars weren't equal() due to the difference in vartype. Fix by
providing a subroutine the planner can call to generate whole-row Vars
the same way the parser does.
Per bug #5716 from Andrew Tipton. Back-patch to 9.0 where one of the bogus
calls was introduced (the other one is new in HEAD).
In versions 8.2 and up, the grammar allows attaching ORDER BY, LIMIT,
FOR UPDATE, or WITH to VALUES, and hence to INSERT ... VALUES. But the
special-case code for VALUES in transformInsertStmt() wasn't expecting any
of those, and just ignored them, leading to unexpected results. Rather
than complicate the special-case path, just ensure that the presence of any
of those clauses makes us treat the query as if it had a general SELECT.
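For example (table name hypothetical), these clauses were silently discarded
before:
    INSERT INTO t (x) VALUES (1), (2), (3) ORDER BY 1 DESC LIMIT 2;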
Per report from Hitoshi Harada.
It now emerges that the JDBC driver expects to be able to use pg_get_expr()
on an output of a sub-SELECT. So extend the check logic to be able to recurse
into a sub-SELECT to see if the argument is ultimately coming from an
appropriate column. Per report from Thomas Kellerer.
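A sketch of the now-accepted pattern; the argument still traces back to a
known catalog column, just through a sub-SELECT:
    SELECT pg_get_expr(d.adbin, d.adrelid)
      FROM (SELECT adbin, adrelid FROM pg_attrdef) AS d;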
pointed out, it would need a 2nd pass after the whole query is processed to
correctly check that an unknown Param is coerced to the same target type
everywhere. Adding the 2nd pass would add a lot more code, which doesn't
seem worth the risk given that there isn't much of a use case for passing
unknown Params in the first place. The code would work without that check,
but it might be confusing and the behavior would be different from the
varparams case.
Instead, just coerce all unknown params in a PL/pgSQL USING clause to text.
That's simple, and is usually what users expect.
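For example, in a PL/pgSQL function (sketch), the unknown-type literals are
now simply read as text:
    EXECUTE 'SELECT $1 || $2' USING 'Hello, ', 'world';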
Revert the patch in CVS HEAD and master, and backpatch the new solution to
8.4. Unlike the previous solution, this applies easily to 8.4 too.
The implicitly created sequence was created as owned by the current user,
who could be different from the table owner, eg if current user is a
superuser or some member of the table's owning role. This caused sanity
checks in the SEQUENCE OWNED BY code to spit up. Although possibly we
don't need those sanity checks, the safest fix seems to be to make sure
the implicit sequence is assigned the same owner role as the table has.
(We still do all permissions checks as the current user, however.)
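For instance (names hypothetical): if table tab is owned by role alice and a
superuser runs
    ALTER TABLE tab ADD COLUMN id serial;
the implicit sequence is now created as owned by alice rather than by the
superuser, so the OWNED BY sanity check passes.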
Per report from Josh Berkus.
Back-patch to 9.0. The bug goes back to the invention of SEQUENCE OWNED BY
in 8.2, but the fix requires an API change for DefineRelation(), which seems
to have potential for breaking third-party code if done in a minor release.
Given the lack of prior complaints, it's probably not worth fixing in the
stable branches.
parse_analyze() function. That case occurs e.g with PL/pgSQL
EXECUTE ... USING 'stringconstant'.
The coercion is done with a CoerceViaIO node. The result is similar to the coercion
via input function performed for unknown constants in coerce_type(),
except that this happens at runtime.
Backpatch to 9.0. The issue is present in 8.4 as well, but the coerce param
hook infrastructure this patch relies on was introduced in 9.0. Given the
lack of user reports and harmlessness of the bug, it's not worth attempting
a different fix just for 8.4.
matching a call like f(x ORDER BY y, z). It could be that what the user
really wants is f(x, z ORDER BY y). We now have pretty conclusive evidence
that many people won't understand this problem without concrete guidance,
so give it to them. Per further discussion of the string_agg() problem.
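For example (table and column hypothetical), a user writing
    SELECT string_agg(name ORDER BY name, ',') FROM tab;
fails to find a matching one-argument string_agg, and the hint now points out
that string_agg(name, ',' ORDER BY name) may have been intended.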
look through join alias Vars to avoid breaking join queries, and
move the test to someplace where it will catch more possible ways
of calling a function. We still ought to throw away the whole thing
in favor of a data-type-based solution, but that's not feasible in
the back branches.
This needs to be back-patched further than 9.0, but I don't have time
to do so today. Committing now so that the fix gets into 9.0beta4.
any implicit casting previously applied to the targetlist item. This is
reasonable because the implicit cast, by definition, wasn't written by the
user; so we are preserving the expected behavior that ORDER BY items match
textually equivalent tlist items. The case never arose before because there
couldn't be any implicit casting of a top-level SELECT item before we process
ORDER BY etc. But now it can arise in the context of aggregates containing
ORDER BY clauses, since the "targetlist" is the already-casted list of
arguments for the aggregate. The net effect is that the datatype used for
ORDER BY/DISTINCT purposes is the aggregate's declared input type, not that
of the original input column; which is a bit debatable but not horrendous,
and to do otherwise would require major rework that doesn't seem justified.
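A hypothetical illustration: if column s is of type int2 and my_agg is
declared to take int4, then in
    SELECT my_agg(s ORDER BY s) FROM tab;
the ORDER BY item now matches the implicitly-cast argument, and the sort is
performed using int4 semantics rather than int2.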
Per bug #5564 from Daniel Grace. Back-patch to 9.0 where aggregate ORDER BY
was implemented.
but we have nevertheless exposed them to users via pg_get_expr(). It would
be too much maintenance effort to rigorously check the input, so put a hack
in place instead to restrict pg_get_expr() so that the argument must come
from one of the system catalog columns known to contain valid expressions.
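A sketch of the distinction:
    SELECT pg_get_expr(conbin, conrelid) FROM pg_constraint;  -- allowed
    SELECT pg_get_expr('random garbage', 0);                  -- now rejected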
Per report from Rushabh Lathia. Backpatch to 7.4 which is the oldest
supported version at the moment.
"val AS name" to "name := val", as per recent discussion.
This patch catches everything in the original named-parameters patch,
but I'm not certain that no other dependencies snuck in later (grepping
the source tree for all uses of AS soon proved unworkable).
In passing I note that we've dropped the ball at least once on keeping
ecpg's lexer (as opposed to parser) in sync with the backend. It would
be a good idea to go through all of pgc.l and see if it's in sync now.
I didn't attempt that at the moment.
My initial impression that glibc was measuring the precision in characters
(which is what the Linux man page says it does) was incorrect. It does take
the precision to be in bytes, but it also tries to truncate the string at a
character boundary. The bottom line remains the same: it will mess up
if the string is not in the encoding it expects, so we need to avoid %.*s
anytime there's a significant risk of that. Previous code changes are still
good, but adjust the comments to reflect this knowledge. Per research by
Hernan Gonzalez.
Depending on which spec you read, field widths and precisions in %s may be
counted either in bytes or characters. Our code was assuming bytes, which
is wrong at least for glibc's implementation, and in any case libc might
have a different idea of the prevailing encoding than we do. Hence, for
portable results we must avoid using anything more complex than just "%s"
unless the string to be printed is known to be all-ASCII.
This patch fixes the cases I could find, including the psql formatting
failure reported by Hernan Gonzalez. In HEAD only, I also added comments
to some places where it appears safe to continue using "%.*s".
to transformAggregateCall, instead of abusing fields in Aggref to carry them
temporarily. No change in functionality but hopefully the code is a bit
clearer now. Per gripe from Gokulakannan Somasundaram.
This operates in the same way as other CREATE OR REPLACE commands, ie,
it replaces everything but the ownership and ACL lists of an existing
entry, and requires the caller to have owner privileges for that entry.
While modifying an existing language has some use in development scenarios,
in typical usage all the "replaced" values come from pg_pltemplate so there
will be no actual change in the language definition. The reason for adding
this is mainly to allow programs to ensure that a language exists without
triggering an error if it already does exist.
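So a script can simply do
    CREATE OR REPLACE LANGUAGE plpgsql;
without needing to check whether the language already exists.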
This commit just adds and documents the new option. A follow-on patch
will use it to clean up some unpleasant cases in pg_dump and pg_regress.
In addition, add support for a "payload" string to be passed along with
each notify event.
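For example (channel name arbitrary):
    LISTEN jobs;
    NOTIFY jobs, 'job 42 finished';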
This implementation should be significantly more efficient than the old one,
and is also more compatible with Hot Standby usage. There is not yet any
facility for HS slaves to receive notifications generated on the master,
although such a thing is possible in future.
Joachim Wieland, reviewed by Jeff Davis; also hacked on by me.
The purpose of this change is to eliminate the need for every caller
of SearchSysCache, SearchSysCacheCopy, SearchSysCacheExists,
GetSysCacheOid, and SearchSysCacheList to know the maximum number
of allowable keys for a syscache entry (currently 4). This will
make it far easier to increase the maximum number of keys in a
future release should we choose to do so, and it makes the code
shorter, too.
Design and review by Tom Lane.
before we start analyzing the parent statement. This is to make it
more clear that the WITH isn't affected by anything in the parent.
I don't believe there's any actual bug here, because the stuff that
was being done before WITH didn't affect subqueries; but it's certainly
a potential source of error (and apparently misled Marko into committing some
real errors...).
This patch allows the frame to start from CURRENT ROW (in either RANGE or
ROWS mode), and it also adds support for ROWS n PRECEDING and ROWS n FOLLOWING
start and end points. (RANGE value PRECEDING/FOLLOWING isn't there yet ---
the grammar works, but that's all.)
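For example, a three-row moving sum (table hypothetical):
    SELECT x, sum(x) OVER (ORDER BY x ROWS BETWEEN 2 PRECEDING AND CURRENT ROW)
      FROM tab;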
Hitoshi Harada, reviewed by Pavel Stehule
VACUUM FULL INPLACE), along with a boatload of subsidiary code and complexity.
Per discussion, the use case for this method of vacuuming is no longer large
enough to justify maintaining it; not to mention that we don't wish to invest
the work that would be needed to make it play nicely with Hot Standby.
Aside from the code directly related to old-style VACUUM FULL, this commit
removes support for certain WAL record types that could only be generated
within VACUUM FULL, redirect-pointer removal in heap_page_prune, and
nontransactional generation of cache invalidation sinval messages (the last
being the sticking point for Hot Standby).
We still have to retain all code that copes with finding HEAP_MOVED_OFF and
HEAP_MOVED_IN flag bits on existing tuples. This can't be removed as long
as we want to support in-place update from pre-9.0 databases.
of shared or nailed system catalogs. This has two key benefits:
* The new CLUSTER-based VACUUM FULL can be applied safely to all catalogs.
* We no longer have to use an unsafe reindex-in-place approach for reindexing
shared catalogs.
CLUSTER on nailed catalogs now works too, although I left it disabled on
shared catalogs because the resulting pg_index.indisclustered update would
only be visible in one database.
Since reindexing shared system catalogs is now fully transactional and
crash-safe, the former special cases in REINDEX behavior have been removed;
shared catalogs are treated the same as non-shared.
This commit does not do anything about the recently-discussed problem of
deadlocks between VACUUM FULL/CLUSTER on a system catalog and other
concurrent queries; will address that in a separate patch. As a stopgap,
parallel_schedule has been tweaked to run vacuum.sql by itself, to avoid
such failures during the regression tests.
and implement OVERLAY() for bit strings and bytea.
In passing also convert text OVERLAY() to a true built-in, instead of
relying on a SQL function.
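For example:
    SELECT overlay(B'01010101' placing B'11' from 2);
    SELECT overlay('abcdef'::bytea placing 'XY'::bytea from 3 for 2);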
Leonardo F, reviewed by Kevin Grittner
Attributes can now have options, just as relations and tablespaces do, and
the reloptions code is used to parse, validate, and store them. For
simplicity and because these options are not performance critical, we store
them in a separate cache rather than the main relcache.
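With this in place, a per-column option can be written as follows (the
specific option shown, n_distinct, is defined by a separate patch and appears
here only for illustration):
    ALTER TABLE tab ALTER COLUMN a SET (n_distinct = 200);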
Thanks to Alex Hunsaker for the review.
parse analysis phase, rather than at execution time. This makes parameter
handling work the same as it does in ordinary plannable queries, and in
particular fixes the incompatibility that Pavel pointed out with plpgsql's
new handling of variable references. plancache.c gets a little bit
grottier, but the alternatives seem worse.
in the parameter array. Noted while experimenting with an example
from Pavel. This wouldn't come up in normal use, but it ought to honor
the specification that a parameter array can have unused slots.
VACUUM FULL was renamed to VACUUM FULL INPLACE. Also added a new
option -i, --inplace for vacuumdb to perform FULL INPLACE vacuuming.
Since the new VACUUM FULL uses CLUSTER infrastructure, we cannot
use it for system tables. VACUUM FULL on system tables always falls
back to VACUUM FULL INPLACE silently.
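For example (table name hypothetical):
    VACUUM FULL INPLACE mytable;
or, from the shell, vacuumdb --inplace mydb.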
Itagaki Takahiro, reviewed by Jeff Davis and Simon Riggs.
This patch only supports seq_page_cost and random_page_cost as parameters,
but it provides the infrastructure to scalably support many more.
In particular, we may want to add support for effective_io_concurrency,
but I'm leaving that as future work for now.
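For example (tablespace name hypothetical):
    ALTER TABLESPACE ssd SET (seq_page_cost = 0.5, random_page_cost = 0.6);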
Thanks to Tom Lane for design help and Alvaro Herrera for the review.
8.2beta but never carried out. This avoids repetitive tests of whether the
argument is of scalar or composite type. Also, be a bit more paranoid about
composite arguments in some places where we previously weren't checking.
probably got there via blind copy-and-paste from one of the legitimate
callers, so rearrange and comment that code a bit to make it clearer that
this isn't a necessary prerequisite to hash_create. Per observation
from Robert Haas.