mirror of https://github.com/postgres/postgres.git synced 2025-06-14 18:42:34 +03:00
Commit Graph

88 Commits

Author SHA1 Message Date
cf0f08e03b Only install a portal's ResourceOwner if it actually has one.
In most scenarios a portal without a ResourceOwner is dead and not subject
to any further execution, but a portal for a cursor WITH HOLD remains in
existence with no ResourceOwner after the creating transaction is over.
In this situation, if we attempt to "execute" the portal directly to fetch
data from it, we were setting CurrentResourceOwner to NULL, leading to a
segfault if the datatype output code did anything that required a resource
owner (such as trying to fetch system catalog entries that weren't already
cached).  The case appears to be impossible to provoke with stock libpq,
but psqlODBC at least is able to cause it when working with held cursors.

Simplest fix is to just skip the assignment to CurrentResourceOwner, so
that any resources used by the data output operations will be managed by
the transaction-level resource owner instead.  For consistency I changed
all the places that install a portal's resowner as current, even though
some of them are probably not reachable with a held cursor's portal.

Per report from Joshua Berry (with thanks to Hiroshi Inoue for developing
a self-contained test case).  Back-patch to all supported versions.
2013-06-13 13:11:40 -04:00
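A minimal illustrative sketch of the guard described above (not the committed hunk; it assumes the portal pointer and backend globals of that era):

        ResourceOwner   saveResourceOwner = CurrentResourceOwner;

        /* Install the portal's resource owner only if it actually has one;
         * a held cursor's portal has resowner == NULL, so data-output code
         * keeps using the transaction-level resource owner instead. */
        if (portal->resowner)
            CurrentResourceOwner = portal->resowner;

        /* ... fetch and output tuples from the portal ... */

        CurrentResourceOwner = saveResourceOwner;
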
04a0231e56 Run a portal's cleanup hook immediately when pushing it to FAILED state.
This extends the changes of commit 6252c4f9e2
so that we run the cleanup hook earlier for failure cases as well as
success cases.  As before, the point is to avoid an assertion failure from
an Assert I added in commit a874fe7b4c, which
was meant to check that no user-written code can be called during portal
cleanup.  This fixes a case reported by Pavan Deolasee in which the Assert
could be triggered during backend exit (see the new regression test case),
and also prevents the possibility that the cleanup hook is run after
portions of the portal's state have already been recycled.  That doesn't
really matter in current usage, but it foreseeably could matter in the
future.

Back-patch to 9.1 where the Assert in question was added.
2012-02-15 16:18:39 -05:00
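A hedged sketch of the earlier-cleanup idea, assuming the cleanup hook stored in PortalData (illustrative only, not the committed code):

        /* When pushing a portal to FAILED, run and clear its cleanup hook
         * right away, before any of the portal's state has been recycled. */
        portal->status = PORTAL_FAILED;
        if (PointerIsValid(portal->cleanup))
        {
            (*portal->cleanup) (portal);
            portal->cleanup = NULL;
        }
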
bf50caf105 pgindent run before PG 9.1 beta 1. 2011-04-10 11:42:00 -04:00
a874fe7b4c Refactor the executor's API to support data-modifying CTEs better.
The originally committed patch for modifying CTEs didn't interact well
with EXPLAIN, as noted by myself, and also had corner-case problems with
triggers, as noted by Dean Rasheed.  Those problems show it is really not
practical for ExecutorEnd to call any user-defined code; so split the
cleanup duties out into a new function ExecutorFinish, which must be called
between the last ExecutorRun call and ExecutorEnd.  Some Asserts have been
added to these functions to help verify correct usage.

It is no longer necessary for callers of the executor to call
AfterTriggerBeginQuery/AfterTriggerEndQuery for themselves, as this is now
done by ExecutorStart/ExecutorFinish respectively.  If you really need to
suppress that and do it for yourself, pass EXEC_FLAG_SKIP_TRIGGERS to
ExecutorStart.

Also, refactor portal commit processing to allow for the possibility that
PortalDrop will invoke user-defined code.  I think this is not actually
necessary just yet, since the portal-execution-strategy logic forces any
non-pure-SELECT query to be run to completion before we will consider
committing.  But it seems like good future-proofing.
2011-02-27 13:44:12 -05:00
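A hedged caller-side sketch of the sequence described above (9.1-era signatures; exact argument lists vary across releases):

        #include "executor/executor.h"

        QueryDesc  *queryDesc = CreateQueryDesc(plannedstmt, sourceText,
                                                GetActiveSnapshot(), InvalidSnapshot,
                                                dest, params, 0);

        ExecutorStart(queryDesc, 0);                        /* does AfterTriggerBeginQuery */
        ExecutorRun(queryDesc, ForwardScanDirection, 0L);   /* count 0 = run to completion */
        ExecutorFinish(queryDesc);                          /* fires AFTER triggers; may run user code */
        ExecutorEnd(queryDesc);                             /* must not reach user-defined code */

        /* Callers that handle triggers themselves pass EXEC_FLAG_SKIP_TRIGGERS
         * to ExecutorStart instead. */
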
5d950e3b0c Stamp copyrights for year 2011. 2011-01-01 13:18:15 -05:00
9f2e211386 Remove cvs keywords from all files. 2010-09-20 22:08:53 +02:00
0239800893 Update copyright for the year 2010. 2010-01-02 16:58:17 +00:00
07cefdfb7a Fix snapshot management, take two.
Partially revert the previous patch I installed and replace it with a more
general fix: any time a snapshot is pushed as Active, we need to ensure that it
will not be modified in the future.  This means that if the same snapshot is
used as CurrentSnapshot, it needs to be copied separately.  This affects
serializable transactions only, because CurrentSnapshot has already been copied
by RegisterSnapshot and so PushActiveSnapshot does not think it needs another
copy.  However, CommandCounterIncrement would modify CurrentSnapshot, whereas
ActiveSnapshots must not have their command counters incremented.

I say "partially" because the regression test I added for the previous bug
has been kept.

(This restores 8.3 behavior, because before snapmgr.c existed, any snapshot set
as Active was copied.)

Per bug report from Stuart Bishop in
6bc73d4c0910042358k3d1adff3qa36f8df75198ecea@mail.gmail.com
2009-10-07 16:27:18 +00:00
caa4cfa369 Ensure that a cursor has an immutable snapshot throughout its lifespan.
The old coding was using a regular snapshot, referenced elsewhere, that was
subject to having its command counter updated.  Fix by creating a private copy
of the snapshot exclusively for the cursor.

Backpatch to 8.4, which is when the bug was introduced during the snapshot
management rewrite.
2009-10-02 17:57:30 +00:00
d747140279 8.4 pgindent run, with new combined Linux/FreeBSD/MinGW typedef list
provided by Andrew.
2009-06-11 14:49:15 +00:00
511db38ace Update copyright for 2009. 2009-01-01 17:24:05 +00:00
ec543db77b Ensure that the contents of a holdable cursor don't depend on out-of-line
toasted values, since those could get dropped once the cursor's transaction
is over.  Per bug #4553 from Andrew Gierth.

Back-patch as far as 8.1.  The bug actually exists back to 7.4 when holdable
cursors were introduced, but this patch won't work before 8.1 without
significant adjustments.  Given the lack of field complaints, it doesn't seem
worth the work (and risk of introducing new bugs) to try to make a patch for
the older branches.
2008-12-01 17:06:21 +00:00
c1f3073333 Clean up the API for DestReceiver objects by eliminating the assumption
that a Portal is a useful and sufficient additional argument for
CreateDestReceiver --- it just isn't, in most cases.  Instead formalize
the approach of passing any needed parameters to the receiver separately.

One unexpected benefit of this change is that we can declare typedef Portal
in a less surprising location.

This patch is just code rearrangement and doesn't change any functionality.
I'll tackle the HOLD-cursor-vs-toast problem in a follow-on patch.
2008-11-30 20:51:25 +00:00
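A hedged sketch of the reworked pattern, with SetRemoteDestReceiverParams standing in as the separate parameter-setting call for the remote-output receiver:

        /* before: receiver = CreateDestReceiver(DestRemote, portal); */

        DestReceiver *receiver = CreateDestReceiver(DestRemote);

        /* pass whatever the receiver needs explicitly, instead of via a Portal */
        SetRemoteDestReceiverParams(receiver, portal);
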
a1c692358b Adjust things so that the query_string of a cached plan and the sourceText of
a portal are never NULL, but reliably provide the source text of the query.
It turns out that there was only one place that was really taking a short-cut,
which was the 'EXECUTE' utility statement.  That doesn't seem like a
sufficiently critical performance hotspot to justify not offering a guarantee
of validity of the portal source text.  Fix it to copy the source text over
from the cached plan.  Add Asserts in the places that set up cached plans and
portals to reject null source strings, and simplify a bunch of places that
formerly needed to guard against nulls.

There may be a few places that cons up statements for execution without
having any source text at all; I found one such in ConvertTriggerToFK().
It seems sufficient to inject a phony source string in such a case,
for instance
        ProcessUtility((Node *) atstmt,
                       "(generated ALTER TABLE ADD FOREIGN KEY command)",
                       NULL, false, None_Receiver, NULL);

We should take a second look at the usage of debug_query_string,
particularly the recently added current_query() SQL function.

ITAGAKI Takahiro and Tom Lane
2008-07-18 20:26:06 +00:00
5da9da71c4 Improve snapshot manager by keeping explicit track of snapshots.
There are two ways to track a snapshot: there's the "registered" list, which
is used for arbitrary long-lived snapshots; and there's the "active stack",
which is used for the snapshot that is considered "active" at any time.
This also allows users of snapshots to stop worrying about snapshot memory
allocation and freeing, and about using PG_TRY blocks around ActiveSnapshot
assignment.  This is all done automatically now.

As a consequence, this allows us to reset MyProc->xmin when there are no
more snapshots registered in the current backend, reducing the impact that
long-running transactions have on VACUUM.
2008-05-12 20:02:02 +00:00
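A hedged usage sketch of the two tracking mechanisms described above (names per snapmgr.h; error-path cleanup elided):

        #include "utils/snapmgr.h"

        /* long-lived reference: the "registered" list */
        Snapshot        snap = RegisterSnapshot(GetTransactionSnapshot());
        /* ... use snap across arbitrary code; no PG_TRY needed for cleanup ... */
        UnregisterSnapshot(snap);

        /* statement-scoped reference: the "active stack" */
        PushActiveSnapshot(GetTransactionSnapshot());
        /* ... run a query under GetActiveSnapshot() ... */
        PopActiveSnapshot();
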
1591fcbec7 Revert my bad decision of about a year ago to make PortalDefineQuery
responsible for copying the query string into the new Portal.  Such copying
is unnecessary in the common code path through exec_simple_query, and in
this case it can be enormously expensive because the string might contain
a large number of individual commands; we were copying the entire, long
string for each command, resulting in O(N^2) behavior for N commands.
(This is the cause of bug #4079.)  A second problem with it is that
PortalDefineQuery really can't risk error, because if it elog's before
having set up the Portal, we will leak the plancache refcount that the
caller is trying to hand off to the portal.  So go back to the design in
which the caller is responsible for making sure everything is copied into
the portal if necessary.
2008-04-02 18:31:50 +00:00
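A hedged sketch of the caller-side responsibility this restores, assuming the era's PortalGetHeapMemory() macro for the portal's private context (exec_simple_query can skip the copy because its portal does not outlive the original string):

        MemoryContext   oldcxt = MemoryContextSwitchTo(PortalGetHeapMemory(portal));
        const char     *portal_query = pstrdup(query_string);  /* copy only when needed */

        MemoryContextSwitchTo(oldcxt);
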
78f02ca1f5 Rename snapmgmt.c/h to snapmgr.c/h, for consistency with other files.
Per complaint from Tom Lane.
2008-03-26 18:48:59 +00:00
d43b085d57 Separate snapshot management code from tuple visibility code, create a
snapmgmt.c file for the former.  The header files have also been reorganized
in three parts: the most basic snapshot definitions are now in a new file
snapshot.h, and the also new snapmgmt.h keeps the definitions for snapmgmt.c.
tqual.h has been reduced to the bare minimum.

This patch is just a first step towards managing live snapshots within a
transaction; there is no functionality change.

Per my proposal to pgsql-patches on 20080318191940.GB27458@alvh.no-ip.org and
subsequent discussion.
2008-03-26 16:20:48 +00:00
8759b79d0f Add a couple of missing FreeQueryDesc calls. Noticed while testing a
framework to keep track of snapshots in use.
2008-03-20 20:05:56 +00:00
9098ab9e32 Update copyrights in source tree to 2008. 2008-01-01 19:46:01 +00:00
f6e8730d11 Re-run pgindent with updated list of typedefs. (Updated README should
avoid this problem in the future.)
2007-11-15 22:25:18 +00:00
fdf5a5efb7 pgindent run for 8.3. 2007-11-15 21:14:46 +00:00
048efc25e4 Disallow scrolling of FOR UPDATE/FOR SHARE cursors, so as to avoid problems
in corner cases such as re-fetching a just-deleted row.  We may be able to
relax this someday, but let's find out how many people really care before
we invest a lot of work in it.  Per report from Heikki and subsequent
discussion.

While in the neighborhood, make the combination of INSENSITIVE and FOR UPDATE
throw an error, since they are semantically incompatible.  (Up to now we've
accepted but just ignored the INSENSITIVE option of DECLARE CURSOR.)
2007-10-24 23:27:08 +00:00
bbbe825f5f Modify processing of DECLARE CURSOR and EXPLAIN so that they can resolve the
types of unspecified parameters when submitted via extended query protocol.
This worked in 8.2 but I had broken it during plancache changes.  DECLARE
CURSOR is now treated almost exactly like a plain SELECT through parse
analysis, rewrite, and planning; only just before sending to the executor
do we divert it away to ProcessUtility.  This requires a special-case check
in a number of places, but practically all of them were already special-casing
SELECT INTO, so it's not too ugly.  (Maybe it would be a good idea to merge
the two by treating IntoClause as a form of utility statement?  Not going to
worry about that now, though.)  That approach doesn't work for EXPLAIN,
however, so for that I punted and used a klugy solution of running parse
analysis an extra time if under extended query protocol.
2007-04-27 22:05:49 +00:00
66888f7424 Expose more cursor-related functionality in SPI: specifically, allow
access to the planner's cursor-related planning options, and provide new
FETCH/MOVE routines that allow access to the full power of those commands.
Small refactoring of planner(), pg_plan_query(), and pg_plan_queries()
APIs to make it convenient to pass the planning options down from SPI.

This is the core-code portion of Pavel Stehule's patch for scrollable
cursor support in plpgsql; I'll review and apply the plpgsql changes
separately.
2007-04-16 01:14:58 +00:00
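A hedged sketch of the new SPI calls described above (8.3-era names and signatures; to be run inside an SPI_connect()ed function):

        #include "executor/spi.h"

        SPIPlanPtr      plan;
        Portal          portal;

        plan = SPI_prepare_cursor("SELECT * FROM foo", 0, NULL, CURSOR_OPT_SCROLL);
        portal = SPI_cursor_open("c", plan, NULL, NULL, true /* read_only */);

        /* full FETCH/MOVE power: direction plus count, as in the SQL commands */
        SPI_scroll_cursor_fetch(portal, FETCH_ABSOLUTE, 10);
        SPI_scroll_cursor_move(portal, FETCH_BACKWARD, 2);

        SPI_cursor_close(portal);
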
d13e903bea RESET SESSION, plus related new DDL commands. Patch from Marko Kreen,
reviewed by Neil Conway. This patch adds the following DDL command
variants: RESET SESSION, RESET TEMP, RESET PLANS, CLOSE ALL, and
DEALLOCATE ALL. RESET SESSION is intended for use by connection
pool software and the like, in order to reset a client session
to something close to its initial state.

Note that while most of these command variants can be executed
inside a transaction block (but are not transaction-aware!),
RESET SESSION cannot. While this is inconsistent, it is intended
to catch programmer mistakes: RESET SESSION in an open transaction
block is probably unintended.
2007-04-12 06:53:49 +00:00
b9527e9840 First phase of plan-invalidation project: create a plan cache management
module and teach PREPARE and protocol-level prepared statements to use it.
In service of this, rearrange utility-statement processing so that parse
analysis does not assume table schemas can't change before execution for
utility statements (necessary because we don't attempt to re-acquire locks
for utility statements when reusing a stored plan).  This requires some
refactoring of the ProcessUtility API, but it ends up cleaner anyway,
for instance we can get rid of the QueryContext global.

Still to do: fix up SPI and related code to use the plan cache; I'm tempted to
try to make SQL functions use it too.  Also, there are at least some aspects
of system state that we want to ensure remain the same during a replan as in
the original processing; search_path certainly ought to behave that way for
instance, and perhaps there are others.
2007-03-13 00:33:44 +00:00
9cbd0c155d Remove the Query structure from the executor's API. This allows us to stop
storing mostly-redundant Query trees in prepared statements, portals, etc.
To replace Query, a new node type called PlannedStmt is inserted by the
planner at the top of a completed plan tree; this carries just the fields of
Query that are still needed at runtime.  The statement lists kept in portals
etc. now consist of intermixed PlannedStmt and bare utility-statement nodes
--- no Query.  This incidentally allows us to remove some fields from Query
and Plan nodes that shouldn't have been there in the first place.

Still to do: simplify the execution-time range table; at the moment the
range table passed to the executor still contains Query trees for subqueries.

initdb forced due to change of stored rules.
2007-02-20 17:32:18 +00:00
5b706ba481 Fix an error in the original coding of holdable cursors: PersistHoldablePortal
thought that it didn't have to reposition the underlying tuplestore if the
portal is atEnd.  But this is not so, because tuplestores have separate read
and write cursors ... and the read cursor hasn't moved from the start.
This mistake explains bug #2970 from William Zhang.

Note: the coding here is pretty inefficient, but given that no one has noticed
this bug until now, I'd say hardly anyone uses the case where the cursor has
been advanced before being persisted.  So maybe it's not worth worrying about.
2007-02-06 22:49:24 +00:00
8b4ff8b6a1 Wording cleanup for error messages. Also change can't -> cannot.
Standard English uses "may", "can", and "might" in different ways:

        may - permission, "You may borrow my rake."

        can - ability, "I can lift that log."

        might - possibility, "It might rain today."

Unfortunately, in conversational English, their use is often mixed, as
in, "You may use this variable to do X", when in fact, "can" is a better
choice.  Similarly, "It may crash" is better stated, "It might crash".
2007-02-01 19:10:30 +00:00
29dccf5fe0 Update CVS HEAD for 2007 copyright. Back branches are typically not
back-stamped for this.
2007-01-05 22:20:05 +00:00
2315df21f8 Fix a couple of places that were assuming debug_query_string couldn't
be NULL ... seems an unsafe assumption.
2006-10-19 19:53:03 +00:00
f99a569a2e pgindent run for 8.2. 2006-10-04 00:30:14 +00:00
893632be4e Clean up logging for extended-query-protocol operations, as per my recent
proposal.  Parameter logging works even for binary-format parameters, and
logging overhead is avoided when disabled.

log_statement = all output for the src/test/examples/testlibpq3.c example
now looks like

LOG:  statement: execute <unnamed>: SELECT * FROM test1 WHERE t = $1
DETAIL:  parameters: $1 = 'joe''s place'
LOG:  statement: execute <unnamed>: SELECT * FROM test1 WHERE i = $1::int4
DETAIL:  parameters: $1 = '2'

and log_min_duration_statement = 0 results in

LOG:  duration: 2.431 ms  parse <unnamed>: SELECT * FROM test1 WHERE t = $1
LOG:  duration: 2.335 ms  bind <unnamed> to <unnamed>: SELECT * FROM test1 WHERE t = $1
DETAIL:  parameters: $1 = 'joe''s place'
LOG:  duration: 0.394 ms  execute <unnamed>: SELECT * FROM test1 WHERE t = $1
DETAIL:  parameters: $1 = 'joe''s place'
LOG:  duration: 1.251 ms  parse <unnamed>: SELECT * FROM test1 WHERE i = $1::int4
LOG:  duration: 0.566 ms  bind <unnamed> to <unnamed>: SELECT * FROM test1 WHERE i = $1::int4
DETAIL:  parameters: $1 = '2'
LOG:  duration: 0.173 ms  execute <unnamed>: SELECT * FROM test1 WHERE i = $1::int4
DETAIL:  parameters: $1 = '2'

(This example demonstrates the folly of ignoring parse/bind steps for duration
logging purposes, BTW.)

Along the way, create a less ad-hoc mechanism for determining which commands
are logged by log_statement = mod and log_statement = ddl.  The former coding
was actually missing quite a few things that look like ddl to me, and it
did not handle EXECUTE or extended query protocol correctly at all.

This commit does not do anything about the question of whether log_duration
should be removed or made less redundant with log_min_duration_statement.
2006-09-07 22:52:01 +00:00
5983a1aaa9 Change processing of extended-Query mode so that an unnamed statement
that has parameters is always planned afresh for each Bind command,
treating the parameter values as constants in the planner.  This removes
the performance penalty formerly often paid for using out-of-line
parameters --- with this definition, the planner can do constant folding,
LIKE optimization, etc.  After a suggestion by Andrew@supernews.
2006-09-06 20:40:48 +00:00
0e20c48561 Revert FETCH/MOVE int64 patch. Was using incorrect checks for
fetch/move in scan.l.
2006-09-03 03:19:45 +00:00
6c785d599d Change FETCH/MOVE to use int8.
Dhanaraj M
2006-09-02 18:17:18 +00:00
bc24d5b976 Now bind displays prepare as detail, and execute displays prepare and
optionally bind.  I re-added the "statement:" label so people will
understand why the line is being printed (it is log_*statement
behavior).

Use single quotes for bind values, instead of double quotes, and double
any literal single quotes in bind values (and document that).  I also made
use of the DETAIL line to produce much cleaner output.
2006-08-29 02:11:30 +00:00
3f8db37c2f Tweak SPI_cursor_open to allow INSERT/UPDATE/DELETE RETURNING; this was
merely a matter of fixing the error check, since the underlying Portal
infrastructure already handles it.  This in turn allows these statements
to be used in some existing plpgsql and plperl contexts, such as a
plpgsql FOR loop.  Also, do some marginal code cleanup in places that
were being sloppy about distinguishing SELECT from SELECT INTO.
2006-08-12 20:05:56 +00:00
3716f90c39 For protocol-level prepare/bind/execute:
	o  print user name for all
	o  print portal name if defined for all
	o  print query for all
	o  reduce log_statement header to single keyword
	o  print bind parameters as DETAIL if text mode
2006-08-08 01:23:15 +00:00
a22d76d96a Allow include files to compile on their own.
Strip out unused include files, and add needed
includes to C files.

The next step is to remove unused include files in C files.
2006-07-13 16:49:20 +00:00
3f50ba27cf Create infrastructure for 'MinimalTuple' representation of in-memory
tuples with less header overhead than a regular HeapTuple, per my
recent proposal.  Teach TupleTableSlot code how to deal with these.
As proof of concept, change tuplestore.c to store MinimalTuples instead
of HeapTuples.  Future patches will expand the concept to other places
where it is useful.
2006-06-27 02:51:40 +00:00
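A hedged sketch of the slot-based path that lets tuplestore.c keep MinimalTuples internally (8.2-era signatures; later releases add parameters):

        #include "utils/tuplestore.h"

        Tuplestorestate *ts = tuplestore_begin_heap(true /* randomAccess */, false, work_mem);

        tuplestore_puttupleslot(ts, slot);          /* stored internally as a MinimalTuple */
        tuplestore_gettupleslot(ts, true, slot);    /* read back into a TupleTableSlot */
        tuplestore_end(ts);
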
f2f5b05655 Update copyright for 2006. Update scripts. 2006-03-05 15:59:11 +00:00
33e06ebccb Add a new system view, pg_cursors, that displays the currently available
cursors. Patch from Joachim Wieland, review and editorialization by Neil
Conway. The view lists cursors defined by DECLARE CURSOR, using SPI, or
via the Bind message of the frontend/backend protocol. This means the
view does not list the unnamed portal or the portal created to implement
EXECUTE. Because we do list SPI portals, there might be more rows in
this view than you might expect if you are using SPI implicitly (e.g.
via a procedural language).

Per recent discussion on -hackers, the query string included in the
view for cursors defined by DECLARE CURSOR is based on
debug_query_string. That means it is not accurate if multiple queries
separated by semicolons are submitted as one query string. However,
there doesn't seem to be a trivial fix for that: debug_query_string
is better than nothing. I also changed SPI_cursor_open() to include
the source text for the portal it creates: AFAICS there is no reason
not to do this.

Update the documentation and regression tests, bump the catversion.
2006-01-18 06:49:30 +00:00
902377c465 Rename the members of CommandDest enum so they don't collide with other uses of
those names.  (Debug and None were pretty bad names anyway.)  I hope I caught
all uses of the names in comments too.
2005-11-03 17:11:40 +00:00
1dc3498251 Standard pgindent run for 8.1. 2005-10-15 02:49:52 +00:00
ba42002461 Revise handling of dropped columns in JOIN alias lists to avoid a
performance problem pointed out by phil@vodafone: to wit, we were
spending O(N^2) time to check dropped-ness in an N-deep join tree,
even in the case where the tree was freshly constructed and couldn't
possibly mention any dropped columns.  Instead of recursing in
get_rte_attribute_is_dropped(), change the data structure definition:
the joinaliasvars list of a JOIN RTE must have a NULL Const instead
of a Var at any position that references a now-dropped column.  This
costs nothing during normal parse-rewrite-plan path, and instead we
have a linear-time update to make when loading a stored rule that
might contain now-dropped columns.  While at it, move the responsibility
for acquiring locks on relations referenced by rules into this separate
function (which I therefore chose to call AcquireRewriteLocks).
This saves effort --- namely, duplicated lock grabs in parser and rewriter
--- in the normal path at a cost of one extra non-locked heap_open()
in the stored-rule path; seems a good tradeoff.  A fringe benefit is
that it is now *much* clearer that we acquire lock on relations referenced
in rules before we make any rewriter decisions based on their properties.
(I don't know of any bug of that ilk, but it wasn't exactly clear before.)
2005-06-03 23:05:30 +00:00
bedb78d386 Implement sharable row-level locks, and use them for foreign key references
to eliminate unnecessary deadlocks.  This commit adds SELECT ... FOR SHARE
paralleling SELECT ... FOR UPDATE.  The implementation uses a new SLRU
data structure (managed much like pg_subtrans) to represent multiple-
transaction-ID sets.  When more than one transaction is holding a shared
lock on a particular row, we create a MultiXactId representing that set
of transactions and store its ID in the row's XMAX.  This scheme allows
an effectively unlimited number of row locks, just as we did before,
while not costing any extra overhead except when a shared lock actually
has to be shared.   Still TODO: use the regular lock manager to control
the grant order when multiple backends are waiting for a row lock.

Alvaro Herrera and Tom Lane.
2005-04-28 21:47:18 +00:00
0c400f1bbc PersistHoldablePortal must establish the correct value for ActiveSnapshot
while completing execution of the cursor's query.  Otherwise we get wrong
answers or even crashes from non-volatile functions called by the query.
Per report from andrew@supernews.
2005-04-11 15:59:34 +00:00
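A hedged sketch of the fix's shape, using the pre-8.4 ActiveSnapshot global (before snapmgr.c existed; error handling elided):

        Snapshot        saveActiveSnapshot = ActiveSnapshot;

        /* run the rest of the cursor's query under its own snapshot */
        ActiveSnapshot = queryDesc->snapshot;
        /* ... ExecutorRun to completion, storing rows into the holdable store ... */
        ActiveSnapshot = saveActiveSnapshot;
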
adb1a6e95b Improve EXPLAIN ANALYZE to show the time spent in each trigger when
executing a statement that fires triggers.  Formerly this time was
included in "Total runtime" but not otherwise accounted for.
As a side benefit, we avoid re-opening relations when firing non-deferred
AFTER triggers, because the trigger code can re-use the main executor's
ResultRelInfo data structure.
2005-03-25 21:58:00 +00:00