In several places PL/Python was calling PyObject_Str() and then
PyString_AsString() without checking if the former had returned
NULL to indicate an error. PyString_AsString() doesn't expect a
NULL argument, so passing one causes a segmentation fault. This
patch adds checks for NULL and raises errors via PLy_elog(), which
prints details of the underlying Python exception. The patch also
adds regression tests for these checks. All tests pass on my
Solaris 9 box running HEAD and Python 2.4.1.
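
A minimal sketch of the added pattern (PLy_elog() is PL/Python's error
reporter; `obj' stands for whatever object is being stringified):

    PyObject   *so = PyObject_Str(obj);
    char       *str;

    /* PyObject_Str() returns NULL, with a Python exception set, on
     * failure; that NULL must never reach PyString_AsString() */
    if (so == NULL)
        PLy_elog(ERROR, "could not create string representation of Python object");
    str = PyString_AsString(so);
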
their OID parameter. It was possible to crash the backend with
select array_in('{123}',0,0); because that would bypass the needed step
of initializing the workspace. These seem to be the only two places
with a problem, though (record_in and record_recv don't have the issue,
and the other array functions aren't depending on user-supplied input).
Back-patch as far as 7.4; 7.3 does not have the bug.
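
A hypothetical fragment of the kind of guard involved (the exact
placement inside array_in is simplified here; PG_GETARG_OID,
OidIsValid, and ereport are backend facilities):

    Oid         element_type = PG_GETARG_OID(1);

    /* refuse a zero/invalid element-type OID before any workspace
     * setup is keyed on it */
    if (!OidIsValid(element_type))
        ereport(ERROR,
                (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
                 errmsg("invalid element type OID: %u", element_type)));
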
max_files_per_process. Going further than that is just a waste of
cycles, and it seems that current Cygwin does not cope gracefully
with deliberately running the system out of FDs. Per Andrew Dunstan.
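
An illustrative sketch (not the actual server code) of probing with a
cap, assuming the configured limit arrives as max_to_probe:

    #include <stdlib.h>
    #include <unistd.h>

    static int
    count_usable_fds(int max_to_probe)
    {
        int    *fds = malloc(max_to_probe * sizeof(int));
        int     used = 0;

        if (fds == NULL)
            return 0;
        /* stop at the configured cap instead of exhausting the kernel */
        while (used < max_to_probe)
        {
            int     fd = dup(2);    /* consume one descriptor */

            if (fd < 0)
                break;              /* hit the real system limit */
            fds[used++] = fd;
        }
        for (int i = 0; i < used; i++)
            close(fds[i]);
        free(fds);
        return used;
    }
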
checked that the pointer is actually word-aligned. Casting a non-aligned
pointer to int32* is technically illegal per the C spec, and some recent
versions of gcc actually generate bad code for the memset() when given
such a pointer. Per report from Andrew Morrow.
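
A self-contained sketch of the safe idiom (int32_t standing in for the
int32 of the report):

    #include <stdint.h>
    #include <string.h>

    static void
    zero_region(void *start, size_t len)
    {
        /* take the word-at-a-time path only for genuinely aligned
         * pointers and whole-word lengths; otherwise let memset()
         * work byte-wise */
        if (((uintptr_t) start % sizeof(int32_t)) == 0 &&
            (len % sizeof(int32_t)) == 0)
        {
            int32_t    *p = (int32_t *) start;
            int32_t    *stop = p + len / sizeof(int32_t);

            while (p < stop)
                *p++ = 0;
        }
        else
            memset(start, 0, len);
    }
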
port number, and use a default value for it that is dependent on the
configuration-time DEF_PGPORT. Should make the world safe for running
parallel 'make check' in different branches. Back-patch as far as 7.4
so that this actually is useful.

(1) The code doesn't initialize `sum', so the initial "does the checksum
match?" test is wrong.
(2) The loop that is intended to check for a "null block" just checks
the first byte of the tar block 512 times, rather than each of the
512 bytes one time (!), which I'm guessing was the intent.
It was only through sheer luck that this worked in the first place.
Per Coverity static analysis performed by EnterpriseDB.
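
A sketch of the corrected logic (standalone; real tar checksumming
also counts the checksum field itself as if it were spaces, which is
omitted here for brevity):

    #define TAR_BLOCK_SIZE 512

    static int
    is_null_block(const char *h)
    {
        /* bug (2) fixed: test each of the 512 bytes once, not the
         * first byte 512 times */
        for (int i = 0; i < TAR_BLOCK_SIZE; i++)
            if (h[i] != 0)
                return 0;
        return 1;
    }

    static int
    compute_checksum(const unsigned char *h)
    {
        int     sum = 0;        /* bug (1) fixed: start from zero */

        for (int i = 0; i < TAR_BLOCK_SIZE; i++)
            sum += h[i];
        return sum;
    }
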
copying/converting the new value, which meant that it failed badly on
"var := var" if var is of pass-by-reference type. Fix this and a similar
hazard in exec_move_row(); not sure that the latter can manifest before
8.0, but patch it all the way back anyway. Per report from Dave Chapeskie.
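
In hedged pseudo-C, the hazard and its fix (datumCopy(), pfree(), and
DatumGetPointer() are backend calls; the variable fields are
simplified): with "var := var", oldvalue and newvalue are the same
pointer, so the copy must precede the free.

    Datum   newcopy = datumCopy(newvalue, typbyval, typlen);

    if (!typbyval && oldvalue_needs_free)
        pfree(DatumGetPointer(oldvalue));   /* safe: copy already made */
    var_value = newcopy;
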
not memcpy() to copy the offered key into the hash table during HASH_ENTER.
This avoids possible core dump if the passed key is located very near the
end of memory. Per report from Stefan Kaltenbrunner.
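
An illustrative sketch of the difference, assuming string keys stored
in slots of declared size keysize (>= 1):

    #include <string.h>

    static void
    store_string_key(char *dest, const char *key, size_t keysize)
    {
        /* memcpy(dest, key, keysize) would read a full keysize bytes
         * even when the string is shorter, possibly running off the
         * end of the memory holding the key; strncpy() stops reading
         * at the terminating NUL */
        strncpy(dest, key, keysize - 1);
        dest[keysize - 1] = '\0';
    }
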
if geqo_rand() returns exactly 1.0, resulting in failure due to indexing
off the end of the pool array. Also, since this is using inexact float math,
it seems wise to guard against roundoff error producing values slightly
outside the expected range. Per report from bug@zedware.org.
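
A standalone sketch of the clamping idea:

    static int
    pick_pool_index(double rand01, int pool_size)
    {
        int     idx = (int) (rand01 * pool_size);

        /* rand01 == 1.0, or roundoff slightly outside [0,1), would
         * otherwise index off either end of the pool */
        if (idx < 0)
            idx = 0;
        if (idx >= pool_size)
            idx = pool_size - 1;
        return idx;
    }
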
to just around the bare recv() call that gets a command from the client.
The former placement in PostgresMain was unsafe because the intermediate
processing layers (especially SSL) use facilities such as malloc that are
not necessarily re-entrant. Per report from counterstorm.com.
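
A hedged sketch of the narrowed window (ImmediateInterruptOK is the
historical backend flag for this; the wrapper itself is illustrative):

    #include <stdbool.h>
    #include <sys/socket.h>

    extern volatile bool ImmediateInterruptOK;

    static ssize_t
    interruptible_recv(int sock, void *buf, size_t buflen)
    {
        ssize_t     n;

        ImmediateInterruptOK = true;    /* cancel may interrupt here... */
        n = recv(sock, buf, buflen, 0);
        ImmediateInterruptOK = false;   /* ...but not in SSL/malloc code */
        return n;
    }
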
to columns of an RTE that was a function returning RECORD with a column
definition list. Apparently no one has tried to use non-default typmod
with a function returning RECORD before.

working buffer into ParseDateTime() and reject too-long input there,
rather than checking the length of the input string before calling
ParseDateTime(). The old method was bogus because ParseDateTime() can use
a variable amount of working space, depending on the content of the
input string (e.g. how many fields need to be NUL terminated). This fixes
a minor stack overrun -- I don't _think_ it's exploitable, although I
won't claim to be an expert.
Along the way, fix a bug reported by Mark Dilger: the working buffer
allocated by interval_in() was too short, which resulted in rejecting
some perfectly valid interval input values. I added a regression test for
this fix.
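
A hedged sketch of the revised interface: the caller supplies the
working buffer and its size, and the parser guards every byte it
writes (the prototype matches the new datetime.c signature; the guard
macro is illustrative, with DTERR_BAD_FORMAT as the parser's error
code):

    int ParseDateTime(const char *timestr, char *workbuf, size_t buflen,
                      char **field, int *ftype, int maxfields,
                      int *numfields);

    /* inside the parser, roughly: fail cleanly instead of overrunning */
    #define APPEND_CHAR(bufp, end, c)       \
        do {                                \
            if (((bufp) + 1) >= (end))      \
                return DTERR_BAD_FORMAT;    \
            *(bufp)++ = (c);                \
        } while (0)
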
if they are two-byte multibyte characters. The same thing can happen
if octet_length(multibyte_chars) == n, where n is the declared length
of char(n). Long-standing bug, dating back to 7.3. Per report and fix
from Yoshiyuki Asaba.
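
A hedged sketch of boundary-safe truncation, using the backend's
pg_mblen() (byte length of the possibly-multibyte character starting
at a pointer):

    static int
    truncate_at_char_boundary(const char *s, int slen, int maxbytes)
    {
        int     len = 0;

        while (len < slen)
        {
            int     chlen = pg_mblen(s + len);

            /* never cut a two-byte (or longer) character in half */
            if (len + chlen > maxbytes)
                break;
            len += chlen;
        }
        return len;
    }
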
where there was also a WHERE-clause restriction that applied to the
join. The check on restrictlist == NIL is really unnecessary anyway,
because select_mergejoin_clauses already checked for and complained
about any unmergejoinable join clauses. So just take it out.

and VACUUM: in the interval between adding a new page to the relation
and formatting it, it was possible for VACUUM to come along and decide
it should format the page too. Though not harmful in itself, this would
cause data loss if a third transaction were able to insert tuples into
the vacuumed page before the original extender got control back.
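
In hedged pseudo-C, the safe protocol (ReadBuffer, LockBuffer, and
PageInit are real buffer-manager calls; error handling omitted): the
extender keeps the new page's buffer locked across extension and
initialization, so VACUUM can never see the raw page in between.

    buf = ReadBuffer(relation, P_NEW);          /* extend by one page */
    LockBuffer(buf, BUFFER_LOCK_EXCLUSIVE);
    page = BufferGetPage(buf);
    if (PageIsNew(page))
        PageInit(page, BufferGetPageSize(buf), 0);
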
before we check commit/abort status. Formerly this was done in some paths
but not all, with the result that a transaction might be considered
committed for some purposes before it became committed for others.
Per example found by Jan Wieck.
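
A sketch of the required order of checks (TransactionIdIsInProgress
and TransactionIdDidCommit are the real backend tests; the point is
that the in-progress test must always come first):

    if (TransactionIdIsInProgress(xid))
    {
        /* still running: neither committed nor aborted yet */
    }
    else if (TransactionIdDidCommit(xid))
    {
        /* definitely committed */
    }
    else
    {
        /* aborted, or crashed without committing */
    }
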