mirror of https://github.com/postgres/postgres.git
Commit Graph

94 Commits

Author SHA1 Message Date
Peter Eisentraut
056a5a3f63 Allow committing inside cursor loop
Previously, committing or aborting inside a cursor loop was prohibited
because that would close and remove the cursor.  To allow that,
automatically convert such cursors to holdable cursors so they survive
commits or rollbacks.  Portals now have a new state "auto-held", which
means they have been converted automatically from pinned.  An auto-held
portal is kept on transaction commit or rollback, but is still removed
when returning to the main loop on error.

This supports all languages that have cursor loop constructs: PL/pgSQL,
PL/Python, PL/Perl.
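
For example, a PL/Python procedure body along these lines (table and
column names are hypothetical) can now commit while iterating over a
plpy.cursor; the underlying cursor is converted to a holdable one
automatically:

    # sketch of a PL/Python procedure body; table and column names are hypothetical
    for row in plpy.cursor("SELECT id FROM pending_items ORDER BY id"):
        plpy.execute("UPDATE pending_items SET done = true WHERE id = %d" % row["id"])
        plpy.commit()  # previously rejected: the commit would have closed the pinned cursor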

Reviewed-by: Ildus Kurbangaliev <i.kurbangaliev@postgrespro.ru>
2018-03-28 19:03:26 -04:00
Peter Eisentraut
33803f67f1 Support INOUT arguments in procedures
In a top-level CALL, the values of INOUT arguments will be returned as a
result row.  In PL/pgSQL, the values are assigned back to the input
arguments.  In other languages, the same convention as for returning a
record from a function is used.  That does not require any code changes
in the PL implementations.
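
As a sketch of that convention, the Python body of a hypothetical
procedure declared as p(a int, INOUT b int, INOUT c int) simply returns
the INOUT values like a record:

    # Python body of a hypothetical procedure p(a int, INOUT b int, INOUT c int);
    # the INOUT values are handed back the same way a function returns a record
    return (b * a, c * a)

A top-level CALL p(2, 3, 4) then shows the updated b and c as a result row.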

Reviewed-by: Pavel Stehule <pavel.stehule@gmail.com>
2018-03-14 12:07:28 -04:00
Tom Lane
e748e902de Fix broken logic for reporting PL/Python function names in errcontext.
plpython_error_callback() reported the name of the function associated
with the topmost PL/Python execution context.  This was not merely
wrong if there were nested PL/Python contexts, but it risked a core
dump if the topmost one is an inline code block rather than a named
function.  That will have proname = NULL, and so we were passing a NULL
pointer to snprintf("%s").  It seems that none of the PL/Python-testing
machines in the buildfarm will dump core for that, but some platforms do,
as reported by Marina Polyakova.

Investigation finds that there actually is an existing regression test
that used to prove that the behavior was wrong, though apparently no one
had noticed that it was printing the wrong function name.  It stopped
showing the problem in 9.6 when we adjusted psql to not print CONTEXT
by default for NOTICE messages.  The problem is masked (if your platform
avoids the core dump) in error cases, because PL/Python will throw away
the originally generated error info in favor of a new traceback produced
at the outer level.

Repair by using ErrorContextCallback.arg to pass the correct context to
the error callback.  Add a regression test illustrating correct behavior.

Back-patch to all supported branches, since they're all broken this way.

Discussion: https://postgr.es/m/156b989dbc6fe7c4d3223cf51da61195@postgrespro.ru
2018-02-14 14:47:18 -05:00
Peter Eisentraut
f498704346 PL/Python: Fix tests for older Python versions
Commit 8561e4840c neglected to handle
older Python versions that don't support the "with" statement.  So write
the tests in a way that older versions can handle as well.
2018-01-22 12:09:52 -05:00
Peter Eisentraut
8561e4840c Transaction control in PL procedures
In each of the supplied procedural languages (PL/pgSQL, PL/Perl,
PL/Python, PL/Tcl), add language-specific commit and rollback
functions/commands to control transactions in procedures in that
language.  Add similar underlying functions to SPI.  Some additional
cleanup so that transaction commit or abort doesn't blow away data
structures still used by the procedure call.  Add execution context
tracking to CALL and DO statements so that transaction control commands
can only be issued in top-level procedure and block calls, not function
calls or other procedure or block calls.

- SPI

Add a new function SPI_connect_ext() that is like SPI_connect() but
allows passing option flags.  The only option flag right now is
SPI_OPT_NONATOMIC.  A nonatomic SPI connection can execute transaction
control commands, otherwise it's not allowed.  This is meant to be
passed down from CALL and DO statements which themselves know in which
context they are called.  A nonatomic SPI connection uses different
memory management.  A normal SPI connection allocates its memory in
TopTransactionContext.  For nonatomic connections we use PortalContext
instead.  As the comment in SPI_connect_ext() (previously SPI_connect())
indicates, one could potentially use PortalContext in all cases, but it
seems safest to leave the existing uses alone, because this stuff is
complicated enough already.

SPI also gets new functions SPI_start_transaction(), SPI_commit(), and
SPI_rollback(), which can be used by PLs to implement their transaction
control logic.

- portalmem.c

Some adjustments were made in the code that cleans up portals at
transaction abort.  The portal code could already handle a command
*committing* a transaction and continuing (e.g., VACUUM), but it was not
quite prepared for a command *aborting* a transaction and continuing.

In AtAbort_Portals(), remove the code that marks an active portal as
failed.  As the comment there already predicted, this doesn't work if
the running command wants to keep running after transaction abort.  And
it's actually not necessary, because pquery.c is careful to run all
portal code in a PG_TRY block and explicitly runs MarkPortalFailed() if
there is an exception.  So the code in AtAbort_Portals() is never used
anyway.

In AtAbort_Portals() and AtCleanup_Portals(), we need to be careful not
to clean up active portals too much.  This mirrors similar code in
PreCommit_Portals().

- PL/Perl

Gets new functions spi_commit() and spi_rollback()

- PL/pgSQL

Gets new commands COMMIT and ROLLBACK.

Update the PL/SQL porting example in the documentation to reflect that
transactions are now possible in procedures.

- PL/Python

Gets new functions plpy.commit and plpy.rollback (see the sketch below).

- PL/Tcl

Gets new commands commit and rollback.
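
A minimal sketch of the PL/Python side (the table name is hypothetical):
a procedure body can now interleave SPI commands with explicit
transaction control, as long as it is invoked from a top-level CALL or DO:

    # sketch of a PL/Python procedure body
    for i in range(0, 10):
        plpy.execute("INSERT INTO log_entries (n) VALUES (%d)" % i)
        if i % 2 == 0:
            plpy.commit()    # keep the even-numbered inserts
        else:
            plpy.rollback()  # discard the odd-numbered ones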

Reviewed-by: Andrew Dunstan <andrew.dunstan@2ndquadrant.com>
2018-01-22 08:43:06 -05:00
Peter Eisentraut
e4128ee767 SQL procedures
This adds a new object type "procedure" that is similar to a function
but does not have a return type and is invoked by the new CALL statement
instead of SELECT or similar.  This implementation is aligned with the
SQL standard and compatible with or similar to other SQL implementations.

This commit adds new commands CALL, CREATE/ALTER/DROP PROCEDURE, as well
as ALTER/DROP ROUTINE that can refer to either a function or a
procedure (or an aggregate function, as an extension to SQL).  There is
also support for procedures in various utility commands such as COMMENT
and GRANT, as well as support in pg_dump and psql.  Support for defining
procedures is available in all the languages supplied by the core
distribution.

While this commit is mainly syntax sugar around existing functionality,
future features will rely on having procedures as a separate object
type.

Reviewed-by: Andrew Dunstan <andrew.dunstan@2ndquadrant.com>
2017-11-30 11:03:20 -05:00
Tom Lane
687f096ea9 Make PL/Python handle domain-type conversions correctly.
Fix PL/Python so that it can handle domains over composite, and so that
it enforces domain constraints correctly in other cases that were not
always done properly before.  Notably, it didn't do arrays of domains
right (oversight in commit c12d570fa), and it failed to enforce domain
constraints when returning a composite type containing a domain field,
and if a transform function is being used for a domain's base type then
it failed to enforce domain constraints on the result.  Also, in many
places it missed checking domain constraints on null values, because
the plpy_typeio code simply wasn't called for Py_None.

Rather than try to band-aid these problems, I made a significant
refactoring of the plpy_typeio logic.  The existing design of recursing
for array and composite members is extended to also treat domains as
containers requiring recursion, and the APIs for the module are cleaned
up and simplified.

The patch also modifies plpy_typeio to rely on the typcache more than
it did before (which was pretty much not at all).  This reduces the
need for repetitive lookups, and lets us get rid of an ad-hoc scheme
for detecting changes in composite types.  I added a couple of small
features to typcache to help with that.

Although some of this is fixing bugs that long predate v11, I don't
think we should risk a back-patch: it's a significant amount of code
churn, and there've been no complaints from the field about the bugs.

Tom Lane, reviewed by Anthony Bykov

Discussion: https://postgr.es/m/24449.1509393613@sss.pgh.pa.us
2017-11-16 16:23:04 -05:00
Kevin Grittner
5ebeb579b9 Follow-on cleanup for the transition table patch.
Commit 59702716 added transition table support to PL/pgsql so that
SQL queries in trigger functions could access those transient
tables.  In order to provide the same level of support for PL/perl,
PL/python and PL/tcl, refactor the relevant code into a new
function SPI_register_trigger_data.  Call the new function in the
trigger handler of all four PLs, and document it as a public SPI
function so that authors of out-of-tree PLs can do the same.

Also get rid of a second QueryEnvironment object that was
maintained by PL/pgsql.  That was previously used to deal with
cursors, but the same approach wasn't appropriate for PLs that are
less tangled up with core code.  Instead, have SPI_cursor_open
install the connection's current QueryEnvironment, as already
happens for SPI_execute_plan.

While in the docs, remove the note that transition tables were only
supported in C and PL/pgSQL triggers, and correct some omissions.

Thomas Munro with some work by Kevin Grittner (mostly docs)
2017-04-04 18:36:39 -05:00
Peter Eisentraut
70ec3f1f8f PL/Python: Add cursor and execute methods to plan object
Instead of

    plan = plpy.prepare(...)
    res = plpy.execute(plan, ...)

you can now write

    plan = plpy.prepare(...)
    res = plan.execute(...)

or even

    res = plpy.prepare(...).execute(...)

and similarly for the cursor() method.
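
For instance (with a hypothetical table and parameter), iterating over a
prepared plan via the new method:

    plan = plpy.prepare("SELECT name FROM users WHERE id > $1", ["integer"])
    for row in plan.cursor([100]):
        plpy.notice(row["name"])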

This is more in object-oriented style, and makes the hybrid nature of
the existing execute() function less confusing.

Reviewed-by: Andrew Dunstan <andrew.dunstan@2ndquadrant.com>
2017-03-27 11:37:22 -04:00
Peter Eisentraut
04aad40186 Drop support for Python 2.3
There is no specific reason for this right now, but keeping support for
old Python versions around indefinitely increases the maintenance
burden.  The oldest supported Python version is now Python 2.4, which is
still shipped in RHEL/CentOS 5 by default.

In configure, add a check for the required Python version and give a
friendly error message for an old version, instead of relying on an
obscure build error later on.
2017-02-21 09:49:22 -05:00
Peter Eisentraut
84d457edaf Format PL/Python module contents test vertically
It makes it readable again and makes merges more manageable.
2016-10-27 15:41:29 -04:00
Heikki Linnakangas
8eb6337f9f Remove platform-dependent PL/python test case.
Turns out that the output format of Python Decimal isn't totally platform-
independent either. There are other tests for multi-dimensional arrays,
so rather than try to fix this test case, just remove it.

Per buildfarm member prairiedog.
2016-10-26 17:09:18 +03:00
Heikki Linnakangas
73c8e8506c Avoid using platform-dependent floats in test case.
The number of decimals printed for floats varied in this test case, as
noted by several buildfarm members. There's nothing special about floats
and arrays in the code being tested, so replace the floats with numerics to
make the output platform-independent.
2016-10-26 14:17:07 +03:00
Heikki Linnakangas
510e1b8ecf Give a hint, when [] is incorrectly used for a composite type in array.
That used to be accepted, so let's try to give a hint to users on why
their PL/python functions no longer work.

Reviewed by Pavel Stehule.

Discussion: <CAH38_tmbqwaUyKs9yagyRra=SMaT45FPBxk1pmTYcM0TyXGG7Q@mail.gmail.com>
2016-10-26 10:56:56 +03:00
Heikki Linnakangas
94aceed317 Support multi-dimensional arrays in PL/python.
Multi-dimensional arrays can now be used as arguments to a PL/Python function
(previously this threw an error), and they can be returned as nested Python lists.

This makes a backwards-incompatible change to the handling of composite
types in arrays. Previously, you could return an array of composite types
as "[[col1, col2], [col1, col2]]", but now that is interpreted as a two-
dimensional array. Composite types in arrays must now be returned as
Python tuples, not lists, to resolve the ambiguity. I.e. "[(col1, col2),
(col1, col2)]".

To avoid breaking backwards-compatibility, when not necessary, () is still
accepted for arrays at the top-level, but it is always treated as a
single-dimensional array. Likewise, [] is still accepted for composite types,
when they are not in an array. Update the documentation to recommend using []
for arrays, and () for composite types, with a mention that those other things
are also accepted in some contexts.
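
Illustrative return values under the new rules (each return is the body
of a different hypothetical function):

    # a two-dimensional integer array:
    return [[1, 2], [3, 4]]

    # an array of a two-column composite type; tuples mark the composite values:
    return [(1, 'a'), (2, 'b')]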

This needs to be mentioned in the release notes.

Alexey Grishchenko, Dave Cramer and me. Reviewed by Pavel Stehule.

Discussion: <CAH38_tmbqwaUyKs9yagyRra=SMaT45FPBxk1pmTYcM0TyXGG7Q@mail.gmail.com>
2016-10-26 10:56:30 +03:00
Andres Freund
ccbb852cd6 Fix further hash table order dependent tests.
Similar to 0137caf273, this makes contrib and pl tests less dependent on
hash-table order.  After this commit, at least some order-affecting
changes to execGrouping.c don't result in regression test changes
anymore.
2016-10-12 18:31:45 -07:00
Peter Eisentraut
f0688d6e6c PL/Python: Clean up extended error reporting docs and tests
Format the example and test code more to Python style standards.
Improve whitespace.  Improve documentation formatting.
2016-06-15 10:34:11 -04:00
Peter Eisentraut
020140d84d PL/Python: Rename new keyword arguments of plpy.error() etc.
Rename schema -> schema_name etc. to remain consistent with C API and
PL/pgSQL.
2016-06-11 19:27:49 -04:00
Peter Eisentraut
8359077124 PL/Python: Move ereport wrapper test cases to separate file
In commit 5c3c3cd0a3, the new tests were
apparently just dumped into the first convenient file.  Move them to a
separate file dedicated to testing that functionality and leave the
plpython_test test to test basic functionality, as it did before.
2016-06-07 09:39:11 -04:00
Tom Lane
c7a141a986 Fix PL/Python ereport() test to work on Python 2.3.
Per buildfarm.

Pavel Stehule
2016-04-09 16:44:54 -04:00
Teodor Sigaev
5c3c3cd0a3 Enhanced custom error in PLPythonu
This patch adds a new, richer way to emit an error message or exception
from PL/Python code.
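
A sketch of the resulting interface (the message text and SQLSTATE are
made up):

    plpy.error("invalid widget state",
               detail="the state column must be 'new' or 'done'",
               hint="check the calling application",
               sqlstate="22023")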

Author: Pavel Stehule
Reviewers: Catalin Iacob, Peter Eisentraut, Jim Nasby
2016-04-08 18:33:06 +03:00
Tom Lane
1d2fe56e42 Fix PL/Python for recursion and interleaved set-returning functions.
PL/Python failed if a PL/Python function was invoked recursively via SPI,
since arguments are passed to the function in its global dictionary
(a horrible decision that's far too ancient to undo) and it would delete
those dictionary entries on function exit, leaving the outer recursion
level(s) without any arguments.  Not deleting them would be little better,
since the outer levels would then see the innermost level's arguments.

Since PL/Python uses ValuePerCall mode for evaluating set-returning
functions, it's possible for multiple executions of the same SRF to be
interleaved within a query.  PL/Python failed in such a case, because
it stored only one iterator per function, directly in the function's
PLyProcedure struct.  Moreover, one interleaved instance of the SRF
would see argument values that should belong to another.

Hence, invent code for saving and restoring the argument entries.  To fix
the recursion case, we only need to save at recursive entry and restore
at recursive exit, so the overhead in non-recursive cases is negligible.
To fix the SRF case, we have to save when suspending a SRF and restore
when resuming it, which is potentially not negligible; but fortunately
this is mostly a matter of manipulating Python object refcounts and
should not involve much physical data copying.

Also, store the Python iterator and saved argument values in a structure
associated with the SRF call site rather than the function itself.  This
requires adding a memory context deletion callback to ensure that the SRF
state is cleaned up if the calling query exits before running the SRF to
completion.  Without that we'd leak a refcount to the iterator object in
such a case, resulting in session-lifespan memory leakage.  (In the
pre-existing code, there was no memory leak because there was only one
iterator pointer, but what would happen is that the previous iterator
would be resumed by the next query attempting to use the SRF.  Hardly the
semantics we want.)

We can buy back some of whatever overhead we've added by getting rid of
PLy_function_delete_args(), which seems a useless activity: there is no
need to delete argument entries from the global dictionary on exit,
since the next time anyone would see the global dict is on the next
fresh call of the PL/Python function, at which time we'd overwrite those
entries with new arg values anyway.

Also clean up some really ugly coding in the SRF implementation, including
such gems as returning directly out of a PG_TRY block.  (The only reason
that failed to crash hard was that all existing call sites immediately
exited their own PG_TRY blocks, popping the dangling longjmp pointer before
there was any chance of it being used.)

In principle this is a bug fix; but it seems a bit too invasive relative to
its value for a back-patch, and besides the fix depends on memory context
callbacks so it could not go back further than 9.5 anyway.

Alexey Grishchenko and Tom Lane
2016-04-05 14:51:19 -04:00
Tom Lane
66f503868b Make plpython cope with funny characters in function names.
A function name that's double-quoted in SQL can contain almost any
characters, but we were using that name directly as part of the name
generated for the Python-level function, and Python doesn't like
anything that isn't pretty much a standard identifier.  To fix,
replace anything that isn't an ASCII letter or digit with an underscore
in the generated name.  This doesn't create any risk of duplicate Python
function names because we were already appending the function OID to
the generated name to ensure uniqueness.  Per bug #13960 from Jim Nasby.

Patch by Jim Nasby, modified a bit by me.  Back-patch to all
supported branches.
2016-02-16 21:08:15 -05:00
Tom Lane
f469f634ad Fix plpython crash when returning string representation of a RECORD result.
PLyString_ToComposite() blithely overwrote proc->result.out.d, even though
for a composite result type the other union variant proc->result.out.r is
the one that should be valid.  This could result in a crash if out.r had
in fact been filled in (proc->result.is_rowtype == 1) and then somebody
later attempted to use that data; as per bug #13579 from Paweł Michalak.

Just to add insult to injury, it didn't work for RECORD results anyway,
because record_in() would refuse the case.

Fix by doing the I/O function lookup in a local PLyTypeInfo variable,
as we were doing already in PLyObject_ToComposite().  This is not a great
technique because any fn_extra data allocated by the input function will
be leaked permanently (thanks to using TopMemoryContext as fn_mcxt).
But that's a pre-existing issue that is much less serious than a crash,
so leave it to be fixed separately.

This bug would be a potential security issue, except that plpython is
only available to superusers and the crash requires coding the function
in a way that didn't work before today's patches.

Add regression test cases covering all the supported methods of converting
composite results.

Back-patch to 9.1 where the faulty coding was introduced.
2015-08-21 12:21:37 -04:00
Peter Eisentraut
f16d52269a PL/Python: Make tests pass with Python 3.5
The error message wording for AttributeError has changed in Python 3.5.
For the plpython_error test, add a new expected file.  In the
plpython_subtransaction test, we didn't really care what the exception
is, only that it is something coming from Python.  So use a generic
exception instead, which has a message that doesn't vary across
versions.
2015-08-13 23:55:20 -04:00
Peter Eisentraut
1ce7a57ca6 PL/Python: Avoid lossiness in float conversion
PL/Python uses str() to convert Python values back to PostgreSQL, but
str() is lossy for float values, so use repr() instead in that case.
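
A quick illustration of the difference under Python 2.7 (the values in
the comments are what that interpreter prints):

    x = 1.0 / 3.0
    str(x)    # '0.333333333333'      (Python 2 rounds to 12 significant digits)
    repr(x)   # '0.3333333333333333'  (enough digits to round-trip the value)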

Author: Marko Kreen <markokr@gmail.com>
2015-03-11 15:46:06 -04:00
Tom Lane
8b6010b835 Improve support for composite types in PL/Python.
Allow PL/Python functions to return arrays of composite types.
Also, fix the restriction that plpy.prepare/plpy.execute couldn't
handle query parameters or result columns of composite types.

In passing, adopt a saner arrangement for where to release the
tupledesc reference counts acquired via lookup_rowtype_tupdesc.
The callers of PLyObject_ToCompositeDatum were doing the lookups,
but then the releases happened somewhere down inside subroutines
of PLyObject_ToCompositeDatum, which is bizarre and bug-prone.
Instead release in the same function that acquires the refcount.

Ed Behn and Ronan Dunklau, reviewed by Abhijit Menon-Sen
2014-07-03 16:10:50 -04:00
Tom Lane
2dfa15de55 Make plpython_unicode regression test work in more database encodings.
This test previously used a data value containing U+0080, and would
therefore fail if the database encoding didn't have an equivalent to
that; which only about half of our supported server encodings do.
We could fall back to using some plain-ASCII character, but that seems
like it's losing most of the point of the test.  Instead switch to using
U+00A0 (no-break space), which translates into all our supported encodings
except the four in the EUC_xx family.

Per buildfarm testing.  Back-patch to 9.1, which is as far back as this
test is expected to succeed everywhere.  (9.0 has the test, but without
back-patching some 9.1 code changes we could not expect to get consistent
results across platforms anyway.)
2014-06-03 12:01:54 -04:00
Peter Eisentraut
d0765d50f4 PL/Python: Adjust the regression tests for Python 3.4
The error test case in the plpython_do test resulted in a slightly
different error message with Python 3.4.  So pick a different way to
test it that avoids that and is perhaps also a bit clearer.
2014-04-29 22:16:16 -04:00
Tom Lane
6bff0e7d92 Add a regression test case for plpython function returning setof RECORD.
We had coverage for functions returning setof a named composite type,
but not for anonymous records, which is a somewhat different code path.
In view of recent crash report from Sergey Konoplev, this seems worth
testing, though I doubt there's any deterministic bug here today.
2013-12-11 17:22:55 -05:00
Heikki Linnakangas
37364c6311 Handle domains over arrays like plain arrays in PL/python.
Domains over arrays are now converted to/from python lists when passed as
arguments or return values. Like regular arrays.

This has some potential to break applications that rely on the old behavior
that they are passed as strings, but in practice there probably aren't many
such applications out there.

Rodolfo Campero
2013-11-26 14:33:31 +02:00
Peter Eisentraut
527ea66849 PL/Python: Adjust the regression tests for Python 3.3
Similar to 2cfb1c6f77, the order in which
dictionary elements are printed is not reliable.  This reappeared in the
tests of the string representation of result objects.  Reduce the test
case to one result set column so that there is no question of order.
2013-08-11 09:20:16 -04:00
Peter Eisentraut
8182ffde5a PL/Python: Make regression tests pass with older Python versions
Avoid output formatting differences by printing str() instead of repr()
of the value.
2013-07-06 20:37:58 -04:00
Peter Eisentraut
7919398bac PL/Python: Convert numeric to Decimal
The old implementation converted PostgreSQL numeric to Python float,
which was always considered a shortcoming.  Now numeric is converted to
the Python Decimal object.  Either the external cdecimal module or the
standard library decimal module is supported.
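
A sketch of the effect, for a hypothetical function declared with a
numeric parameter x and a numeric return type:

    # x now arrives as a Decimal (from cdecimal or the stdlib decimal module),
    # not a float, so no precision is lost on the way in or out
    plpy.notice(type(x).__name__)   # 'Decimal'
    return x * 3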

From: Szymon Guz <mabewlun@gmail.com>
From: Ronan Dunklau <rdunklau@gmail.com>
Reviewed-by: Steve Singer <steve@ssinger.info>
2013-07-05 22:41:25 -04:00
Peter Eisentraut
330ed4ac6c PL/Python: Add result object str handler
This is intended so that say plpy.debug(rv) prints something useful for
debugging query execution results.

reviewed by Steve Singer
2013-02-03 00:31:01 -05:00
Heikki Linnakangas
316186f289 Handle SPIErrors raised directly in PL/Python code.
If a PL/Python function raises an SPIError (or one of its subclasses)
directly with python's raise statement, treat it the same as an SPIError
generated internally. In particular, if the user sets the sqlstate
attribute, preserve that.
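
A sketch of what this allows (the message and SQLSTATE are made up):

    err = plpy.SPIError("widget is locked")
    err.sqlstate = "55006"   # an explicitly set SQLSTATE is now preserved
    raise err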

Oskari Saarenmaa and Jan Urbański, reviewed by Karl O. Pinc.
2013-01-28 09:46:23 +02:00
Tom Lane
08be00fabe Fix plpython's handling of functions used as triggers on multiple tables.
plpython tried to use a single cache entry for a trigger function, but it
needs a separate cache entry for each table the trigger is applied to,
because there is table-dependent data in there.  This was done correctly
before 9.1, but commit 46211da1b8 broke it
by simplifying the lookup key from "function OID and triggered table OID"
to "function OID and is-trigger boolean".  Go back to using both OIDs
as the lookup key.  Per bug report from Sandro Santilli.

Andres Freund
2013-01-25 16:59:36 -05:00
Peter Eisentraut
db0af74af2 PL/Python: Convert oid to long/int
oid is a numeric type, so transform it to the appropriate Python
numeric type like the other ones.
2012-09-29 12:41:00 -04:00
Tom Lane
45d1f1e024 Adjust PL/Python regression tests some more for Python 3.3.
Commit 2cfb1c6f77 fixed some issues caused
by Python 3.3 choosing to iterate through dict entries in a different order
than before.  But here's another one: the test cases adjusted here made two
bad entries in a dict and expected the one complained of would always be
the same.

Possibly this should be back-patched further than 9.2, but there seems
little point unless the earlier fix is too.
2012-09-08 17:39:02 -04:00
Peter Eisentraut
2cfb1c6f77 PL/Python: Adjust the regression tests for Python 3.3
The string representation of ImportError changed.  Remove printing
that; it's not necessary for the test.

The order in which members of a dict are printed changed.  But this
was always implementation-dependent, so we have just been lucky for a
long time.  Do the printing the hard way to ensure sorted order.
2012-05-11 23:04:47 +03:00
Peter Eisentraut
a97207b690 PL/Python: Fix slicing support for result objects for Python 3
The old way of implementing slicing support by implementing
PySequenceMethods.sq_slice no longer works in Python 3.  You now have
to implement PyMappingMethods.mp_subscript.  Do this by simply
proxying the call to the wrapped list of result dictionaries.
Consolidate some of the subscripting regression tests.

Jan Urbański
2012-05-10 20:40:30 +03:00
Peter Eisentraut
e6c2e8cb87 PL/Python: Improve test coverage
Add test cases for inline handler of plpython2u (when using that
language name), and for result object element assignment.  There is
now at least one test case for every top-level functionality, except
plpy.Fatal (annoying to use in regression tests) and result object
slice retrieval and slice assignment (which are somewhat broken).
2012-05-02 21:09:03 +03:00
Peter Eisentraut
52aa334fcd PL/Python: Fix crash in functions returning SETOF and using SPI
Allocate PLyResultObject.tupdesc in TopMemoryContext, because its
lifetime is the lifetime of the Python object and it shouldn't be
freed by some other memory context, such as one controlled by SPI.  We
trust that the Python object will clean up its own memory.

Before, this would crash the included regression test case by trying
to use memory that was already freed.

reported by Asif Naeem, analysis by Tom Lane
2012-05-02 20:59:51 +03:00
Peter Eisentraut
ba3e4157a7 PL/Python: Accept strings in functions returning composite types
Before 9.1, PL/Python functions returning composite types could return
a string and it would be parsed using record_in.  The 9.1 changes made
PL/Python only expect dictionaries, tuples, or objects supporting
getattr as output of composite functions, resulting in a regression
and a confusing error message, as the strings were interpreted as
sequences and the code for transforming lists to database tuples was
used.  Fix this by treating strings separately as before, before
checking for the other types.
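
That is, a function whose declared return type is a hypothetical
two-column composite (text, integer) may once again hand back the
textual record form:

    # the string is parsed with record_in, as it was before 9.1:
    return '("one",2)'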

The reason why it's important to support string to database tuple
conversion is that trigger functions on tables with composite columns
get the composite row passed in as a string (from record_out).
Without supporting converting this back using record_in, this makes it
impossible to implement pass-through behavior for these columns, as
PL/Python no longer accepts strings for composite values.

A better solution would be to fix the code that transforms composite
inputs into Python objects to produce dictionaries that would then be
correctly interpreted by the Python->PostgreSQL counterpart code.  But
that would be too invasive to backpatch to 9.1, and it is too late in
the 9.2 cycle to attempt it.  It should be revisited in the future,
though.

Reported as bug #6559 by Kirill Simonov.

Jan Urbański
2012-04-26 21:03:48 +03:00
Peter Eisentraut
0f48e06751 PL/Python: Improve documentation of nrows() method
Clarify that nrows() is the number of rows processed, versus the
number of rows returned, which can be obtained using len.  Also add
tests about that.
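
For instance (hypothetical table):

    rv = plpy.execute("UPDATE accounts SET flagged = true WHERE balance < 0")
    rv.nrows()   # rows processed by the UPDATE
    len(rv)      # rows actually returned (0 here, since there is no RETURNING)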
2012-04-16 11:30:32 +03:00
Peter Eisentraut
c03523ed3f PL/Python: Fix crash when colnames() etc. called without result set
The result object methods colnames() etc. would crash when called
after a command that did not produce a result set.  Now they throw an
exception.

discovery and initial patch by Jean-Baptiste Quenot
2012-04-15 20:23:08 +03:00
Peter Eisentraut
ee7fa66b19 PL/Python: Add result metadata functions
Add result object functions .colnames, .coltypes, .coltypmods to
obtain information about the result column names and types, which was
previously not possible in the PL/Python SPI interface.
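
A small sketch of the new accessors:

    rv = plpy.execute("SELECT 1 AS i, 'x'::text AS t")
    rv.colnames()    # ['i', 't']
    rv.coltypes()    # OIDs of the result column types
    rv.coltypmods()  # typmods of the result columns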

reviewed by Abhijit Menon-Sen
2012-01-30 21:38:52 +02:00
Peter Eisentraut
89e850e6fd plpython: Add SPI cursor support
Add a function plpy.cursor that is similar to plpy.execute but uses an
SPI cursor to avoid fetching the entire result set into memory.
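
A sketch of batched fetching with the new function (the table name is
hypothetical):

    cur = plpy.cursor("SELECT id FROM big_table")
    total = 0
    while True:
        rows = cur.fetch(100)   # pull rows in batches instead of materializing them all
        if not rows:
            break
        total += len(rows)
    cur.close()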

Jan Urbański, reviewed by Steve Singer
2011-12-05 19:52:15 +02:00
Heikki Linnakangas
f21fc7f9fc Preserve SQLSTATE when an SPI error is propagated through PL/python
exception handler. This was a regression in 9.1, when the capability
to catch specific SPI errors was added, so backpatch to 9.1.

Mika Eloranta, with some editing by Jan Urbański.
2011-11-24 17:18:43 +02:00
Tom Lane
2dada0cc85 Fix two issues in plpython's handling of composite results.
Dropped columns within a composite type were not handled correctly.
Also, we did not check for whether a composite result type had changed
since we cached the information about it.

Jan Urbański, per a bug report from Jean-Baptiste Quenot
2011-08-17 17:07:16 -04:00