This patch only supports seq_page_cost and random_page_cost as parameters,
but it provides the infrastructure to scalably support many more.
In particular, we may want to add support for effective_io_concurrency,
but I'm leaving that as future work for now.
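A sketch of how such parameters would be set, assuming the ALTER TABLESPACE
option syntax ("fast_disk" is a hypothetical tablespace):

    -- override the planner's page-cost estimates for one tablespace
    ALTER TABLESPACE fast_disk SET (random_page_cost = 2.0, seq_page_cost = 0.5);
    -- revert to the server-wide default
    ALTER TABLESPACE fast_disk RESET (random_page_cost);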
Thanks to Tom Lane for design help and Alvaro Herrera for the review.
Modify pg_dump --binary-upgrade and add backend support routines to
support the preservation of pg_type oids when doing a binary upgrade.
This allows user-defined composite types and arrays to be binary
upgraded.
Index expression columns are now named after the FigureColname result for
their expressions, rather than always being "pg_expression_N". Digits are
appended to this name if needed to make the column name unique within the
index. (That happens for regular columns too, thus fixing the old problem
that CREATE INDEX fooi ON foo (f1, f1) fails. Before exclusion indexes
there was no real reason to do such a thing, but now maybe there is.)
Default names for indexes and associated constraints now include the column
names of all their columns, not only the first one as in previous practice.
(Of course, this will be truncated as needed to fit in NAMEDATALEN. Also,
pkey indexes retain the historical behavior of not naming specific columns
at all.)
An example of the results:
regression=# create table foo (f1 int, f2 text,
regression(# exclude (f1 with =, lower(f2) with =));
NOTICE: CREATE TABLE / EXCLUDE will create implicit index "foo_f1_lower_exclusion" for table "foo"
CREATE TABLE
regression=# \d foo_f1_lower_exclusion
Index "public.foo_f1_lower_exclusion"
 Column |  Type   | Definition
--------+---------+------------
 f1     | integer | f1
 lower  | text    | lower(f2)
btree, for table "public.foo"
This patch also removes buffer-usage statistics from the track_counts
output, since this (or the global server statistics) is deemed to be a better
interface to this information.
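Assuming the buffer statistics now surface through EXPLAIN's BUFFERS option
(a hedged sketch; "foo" is a hypothetical table):

    EXPLAIN (ANALYZE, BUFFERS) SELECT count(*) FROM foo;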
Itagaki Takahiro, reviewed by Euler Taveira de Oliveira.
Without these functions, code outside explain.c can't actually use
ExplainPrintPlan, because the ExplainState won't be initialized properly.
The user-visible result of this was a crash when using auto_explain with
the JSON output format.
Report by Euler Taveira de Oliveira. Analysis by Tom Lane. Patch by me.
support any indexable commutative operator, not just equality. Two rows
violate the exclusion constraint if "row1.col OP row2.col" is TRUE for
each of the columns in the constraint.
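A minimal sketch using the overlaps operator (&&) on the built-in circle
type instead of equality:

    -- two rows conflict when c1 && c2 is TRUE, i.e. the circles overlap
    CREATE TABLE circles (
        c circle,
        EXCLUDE USING gist (c WITH &&)
    );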
Jeff Davis, reviewed by Robert Haas
checked to determine whether the trigger should be fired.
For BEFORE triggers this is mostly a matter of spec compliance; but for AFTER
triggers it can provide a noticeable performance improvement, since queuing of
a deferred trigger event and re-fetching of the row(s) at end of statement can
be short-circuited if the trigger does not need to be fired.
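A sketch of such a conditional trigger, assuming the CREATE TRIGGER WHEN
syntax (the "employees" table and the trigger function log_salary_change()
are hypothetical):

    CREATE TRIGGER salary_audit
        AFTER UPDATE ON employees
        FOR EACH ROW
        WHEN (OLD.salary IS DISTINCT FROM NEW.salary)
        EXECUTE PROCEDURE log_salary_change();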
Takahiro Itagaki, reviewed by KaiGai Kohei.
strength of database passwords, and create a sample implementation of
such a hook as a new contrib module "passwordcheck".
Laurenz Albe, reviewed by Takahiro Itagaki
In VACUUM FULL, an interrupt after the initial transaction has been recorded
as committed can cause postmaster to restart with the following error message:
PANIC: cannot abort transaction NNNN, it was already committed
This problem has been reported many times.
In lazy VACUUM, an interrupt after the table has been truncated by
lazy_truncate_heap causes other backends' relcache to still point to the
removed pages; this can cause future INSERT and UPDATE queries to error out
with the following error message:
could not read block XX of relation 1663/NNN/MMMM: read only 0 of 8192 bytes
The window for this race condition is extremely narrow, but it has been seen in
the wild involving a cancelled autovacuum process.
The solution for both problems is to inhibit interrupts in both operations
until after the respective transactions have been committed. It's not a
complete solution, because the transaction could theoretically be aborted by
some other error, but it at least fixes the most common causes of both problems.
a lot of strange behaviors that occurred in join cases. We now identify the
"current" row for every joined relation in UPDATE, DELETE, and SELECT FOR
UPDATE/SHARE queries. If an EvalPlanQual recheck is necessary, we jam the
appropriate row into each scan node in the rechecking plan, forcing it to emit
only that one row. The former behavior could rescan the whole of each joined
relation for each recheck, which was terrible for performance and, what's
much worse, could result in duplicated output tuples.
Also, the original implementation of EvalPlanQual could not re-use the recheck
execution tree --- it had to go through a full executor init and shutdown for
every row to be tested. To avoid this overhead, I've associated a special
runtime Param with each LockRows or ModifyTable plan node, and arranged to
make every scan node below such a node depend on that Param. Thus, by
signaling a change in that Param, the EPQ machinery can just rescan the
already-built test plan.
This patch also adds a prohibition on set-returning functions in the
targetlist of SELECT FOR UPDATE/SHARE. This is needed to avoid the
duplicate-output-tuple problem. It seems fairly reasonable since the
other restrictions on SELECT FOR UPDATE are meant to ensure that there
is a unique correspondence between source tuples and result tuples,
which an output SRF destroys as much as anything else does.
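For example, a query of this shape is now rejected:

    -- set-returning function in the target list of SELECT FOR UPDATE
    SELECT generate_series(1, f1) FROM foo FOR UPDATE;  -- error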
They are now handled by a new plan node type called ModifyTable, which is
placed at the top of the plan tree. In itself this change doesn't do much,
except perhaps make the handling of RETURNING lists and inherited UPDATEs a
tad less klugy. But it is necessary preparation for the intended extension of
allowing RETURNING queries inside WITH.
Marko Tiikkaja
to create a function for it.
Procedural languages now have an additional entry point, namely a function
to execute an inline code block. This seemed a better design than trying
to hide the transient-ness of the code from the PL. As of this patch, only
plpgsql has an inline handler, but probably people will soon write handlers
for the other standard PLs.
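A sketch of an inline code block, assuming the DO statement syntax:

    DO LANGUAGE plpgsql $$
    BEGIN
        RAISE NOTICE 'executed without creating a permanent function';
    END;
    $$;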
In passing, remove the long-dead LANCOMPILER option of CREATE LANGUAGE.
Petr Jelinek
This patch gets us out from under the Unix limitation of two user-defined
signal types. We already had done something similar for signals directed to
the postmaster process; this adds multiplexing for signals directed to
backends and auxiliary processes (so long as they're connected to shared
memory).
As proof of concept, replace the former usage of SIGUSR1 and SIGUSR2
for backends with use of the multiplexing mechanism. There are still some
hard-wired definitions of SIGUSR1 and SIGUSR2 for other process types,
but getting rid of those doesn't seem interesting at the moment.
Fujii Masao
The current implementation fires an AFTER ROW trigger for each tuple that
looks like it might be non-unique according to the index contents at the
time of insertion. This works well as long as there aren't many conflicts,
but won't scale to massive unique-key reassignments. Improving that case
is a TODO item.
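A sketch of such a reassignment, assuming a deferred unique constraint
("items" is a hypothetical table):

    CREATE TABLE items (n int UNIQUE DEFERRABLE INITIALLY DEFERRED);
    BEGIN;
    UPDATE items SET n = n + 1;  -- transient duplicates are tolerated here
    COMMIT;                      -- uniqueness is verified at commit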
Dean Rasheed
conindid is the index supporting a constraint. We can use this not only for
unique/primary-key constraints, but also foreign-key constraints, which
depend on the unique index that constrains the referenced columns.
tgconstrindid is just copied from the constraint's conindid field, or is
zero for triggers not associated with constraints.
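For example, the supporting index of each constraint can now be fetched
directly:

    SELECT conname, conindid::regclass
    FROM pg_constraint
    WHERE conindid <> 0;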
This is mainly intended as infrastructure for upcoming patches, but it has
some virtue in itself, since it exposes a relationship that you formerly
had to grovel in pg_depend to determine. I simplified one information_schema
view accordingly. (There is a pg_dump query that could also use conindid,
but I left it alone because it wasn't clear it'd get any faster.)
The original syntax made it difficult to add options without making them
into reserved words. This change parenthesizes the options to avoid that
problem, and makes provision for an explicit (and perhaps non-Boolean)
value for each option. The original syntax is still supported, but only
for the two original options ANALYZE and VERBOSE.
As a test case, add a COSTS option that can suppress the planner cost
estimates. This may be useful for including EXPLAIN output in the regression
tests, which are otherwise unable to cope with cross-platform variations in
cost estimates.
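For instance ("foo" as in the earlier example):

    -- parenthesized option list; COSTS off suppresses the cost estimates
    EXPLAIN (COSTS off) SELECT * FROM foo WHERE f1 = 42;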
Robert Haas
This alters various incidental uses of C++ key words to use other similar
identifiers, so that a C++ compiler won't choke outright. You still
(probably) need extern "C" { }; around the inclusion of backend headers.
based on a patch by Kurt Harriman <harriman@acm.org>
Also add a script cpluspluscheck to check for C++ compatibility in the
future. As of right now, this passes without error for me.
of adding optional namespace and action fields to DefElem. Having three
node types that do essentially the same thing bloats the code and leads
to errors of confusion, such as in yesterday's bug report from Khee Chin.
qualifier, and add support for this in pg_dump.
This allows TOAST tables to have user-defined fillfactor, and will also
enable us to move the autovacuum parameters to reloptions without taking
away the possibility of setting values for TOAST tables.
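A sketch in the implied syntax ("foo" is a hypothetical table):

    -- storage parameters prefixed with "toast." apply to the TOAST table
    ALTER TABLE foo SET (toast.fillfactor = 90);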
practically free given prior 8.4 changes in plancache and portal management,
and it makes it a lot easier for ExecutorStart/Run/End hooks to get at the
query text. Extracted from Itagaki Takahiro's pg_stat_statements patch,
with minor editorialization.
This doesn't do any remote or external things yet, but it gives modules
like plproxy and dblink a standardized and future-proof system for
managing their connection information.
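Assuming this refers to the SQL/MED connection objects (a hedged sketch;
all names and options are hypothetical):

    CREATE FOREIGN DATA WRAPPER dummy;
    CREATE SERVER remote_db FOREIGN DATA WRAPPER dummy
        OPTIONS (host 'db.example.com', dbname 'app');
    CREATE USER MAPPING FOR CURRENT_USER SERVER remote_db
        OPTIONS (user 'app_user', password 'secret');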
Martin Pihlak and Peter Eisentraut
skipped. We could update relpages anyway, but it seems better to only
update it together with reltuples, because we use the reltuples/relpages
ratio in the planner. Also don't update n_live_tuples in pgstat.
The ANALYZE step of VACUUM ANALYZE now needs to update pg_class if the
VACUUM phase didn't do so. A boolean parameter was added to let analyze_rel
know whether it should update pg_class.
I also moved the relcache invalidation (to update rd_targblock) from
vac_update_relstats to where RelationTruncate is called, because
vac_update_relstats is not called for partial vacuums anymore. It's more
obvious to send the invalidation close to the truncation that requires it.
Per report by Ned T. Crigler.
* Refactor explain.c slightly to export a convenient-to-use subroutine
for printing EXPLAIN results.
* Provide hooks for plugins to get control at ExecutorStart and ExecutorEnd
as well as ExecutorRun.
* Add some minimal support for tracking the total runtime of ExecutorRun.
This code won't actually do anything unless a plugin prods it to.
* Change the API of the DefineCustomXXXVariable functions to allow nonzero
"flags" to be specified for a custom GUC variable. While at it, also make
the "bootstrap" default value for custom GUCs be explicitly specified as a
parameter to these functions. This is to eliminate confusion over where the
default comes from, as has been expressed in the past by some users of the
custom-variable facility.
* Refactor GUC code a bit to ensure that a custom variable gets initialized to
something valid (like its default value) even if the placeholder value was
invalid.
recent proposal. In typical cases, we now need 12 bytes per insert or delete
event and 16 bytes per update event; previously we needed 40 bytes per
event on 32-bit hardware and 80 bytes per event on 64-bit hardware. Even
in the worst case usage pattern with a large number of distinct triggers being
fired in one query, usage is at most 32 bytes per event. It seems to be a
bit faster than the old code as well, due to reduction of palloc overhead.
This commit doesn't address the TODO item of allowing the event list to spill
to disk; rather it's trying to stave off the need for that. However, it
probably makes that task a bit easier by reducing the data structure's
dependency on pointers. It would now be practical to dump an event list to
disk by "chunks" instead of individual events.
main tables.
This requires vacuum() to accept processing a toast table standalone, so
there's a user-visible change in that it's now possible (for a superuser) to
execute "VACUUM pg_toast.pg_toast_XXX".
of different types than the underlying column. The capability isn't yet
used for anything, but will be required by upcoming patch to analyze
tsvector columns.
Jan Urbanski
corresponding struct definitions. This allows other headers to avoid including
certain highly-loaded headers such as rel.h and relscan.h, instead using just
relcache.h, heapam.h or genam.h, which are more lightweight and thus cause less
unnecessary dependencies.
grammar allows ALTER TABLE/INDEX/SEQUENCE/VIEW interchangeably for all
subforms of those commands, and then we sort out what's really legal
at execution time. This allows the ALTER SEQUENCE/VIEW reference pages
to fully document all the ALTER forms available for sequences and views
respectively, and eliminates a longstanding cause of confusion for users.
The net effect is that the following forms are allowed that weren't before:
ALTER SEQUENCE OWNER TO
ALTER VIEW ALTER COLUMN SET/DROP DEFAULT
ALTER VIEW OWNER TO
ALTER VIEW SET SCHEMA
(There's no actual functionality gain here, but formerly you had to say
ALTER TABLE instead.)
Interestingly, the grammar tables actually get smaller, probably because
there are fewer special cases to keep track of.
I did not disallow using ALTER TABLE for these operations. Perhaps we
should, but there's a backwards-compatibility issue if we do; in fact
it would break existing pg_dump scripts. I did however tighten up
ALTER SEQUENCE and ALTER VIEW to reject non-sequences and non-views
in the new cases as well as a couple of cases where they didn't before.
The patch doesn't change pg_dump to use the new syntaxes, either.
objects are specified, we drop them all in a single performMultipleDeletions
call. This makes the RESTRICT/CASCADE checks more relaxed: it's not counted
as a cascade if one of the later objects has a dependency on an earlier one.
NOTICE messages about such cases go away, too.
In passing, fix the permissions check for DROP CONVERSION, which for some
reason was never made role-aware, and omitted the namespace-owner exemption
too.
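For example ("parent" and "child" are hypothetical tables):

    CREATE TABLE parent (id int PRIMARY KEY);
    CREATE TABLE child (pid int REFERENCES parent);
    -- succeeds without CASCADE: the dependent object is in the same DROP
    DROP TABLE parent, child;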
Alex Hunsaker, with further fiddling by me.
sequence to be reset to its original starting value. This requires adding the
original start value to the set of parameters (columns) of a sequence object,
which is a user-visible change with potential compatibility implications;
it also forces initdb.
Also add hopefully-SQL-compatible RESTART/CONTINUE IDENTITY options to
TRUNCATE TABLE. RESTART IDENTITY executes ALTER SEQUENCE RESTART for all
sequences "owned by" any of the truncated relations. CONTINUE IDENTITY is
a no-op option.
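For instance ("foo" and its owned sequence "foo_seq" are hypothetical):

    ALTER SEQUENCE foo_seq RESTART;   -- back to the stored start value
    TRUNCATE foo RESTART IDENTITY;    -- resets sequences owned by foo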
Zoltan Boszormenyi
inclusions in src/include/catalog/*.h files. The main idea here is to push
function declarations for src/backend/catalog/*.c files into separate headers,
rather than sticking them into the corresponding catalog definition file as
has been done in the past. This commit only carries out that idea fully for
pg_proc, pg_type and pg_conversion, but that's enough for the moment ---
if pg_list.h ever becomes unsafe for frontend code to include, we'll need
to work a bit more.
Zdenek Kotala