was large enough to be batched and the tuples fell into a batch where
there were no inner tuples at all. Thanks to Xiaoyu Wang for finding a
test case that exposed this long-standing bug.
for transaction commits that occurred just before the checkpoint. This is
an EXTREMELY serious bug --- kudos to Satoshi Okada for creating a
reproducible test case to prove its existence.
complete ExtendCLOG() before advancing nextXid, so that if that routine
fails, the next incoming transaction will try it again. Per trouble
report from Christopher Kings-Lynne.
incorrect permissions checking, but in fact disabled nearly all permissions
checks for view updates. This corrects problems reported by Sergey
Yatskevich among others, at the cost of re-introducing the problem
previously reported by Tim Burgess. However, since we'd lived with that
problem for quite a while without knowing it, we can live with it a while
longer until a proper fix can be made in 7.5.
This should prevent some obscure cases of 'variable not in subplan target
lists', although actual failures have only been reported against 7.4 in
which the bug is much easier to trigger.
rule split the query into one INSERT and one UPDATE, where the UPDATE
then hits the just-created row without modifying the key fields again.
In this special case, the new key slipped in totally unchecked.
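A minimal sketch of the kind of setup involved; the table and rule names
here are hypothetical, chosen purely for illustration:

    CREATE TABLE master (id int PRIMARY KEY);
    CREATE TABLE detail (id int REFERENCES master, note text);
    -- ON INSERT rule that rewrites a single INSERT into an INSERT plus an
    -- UPDATE hitting the freshly inserted row without touching its key column
    CREATE RULE detail_note AS ON INSERT TO detail
        DO ALSO UPDATE detail SET note = 'seen' WHERE id = new.id;
    INSERT INTO detail VALUES (42, NULL);  -- 42 not in master; must still be rejected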
Jan
subquery that didn't previously have one. We have traditionally made
the caller of ResolveNew responsible for updating the hasSubLinks flag
of the outermost query, but this fails to account for hasSubLinks in
subqueries. Fix ResolveNew to handle this. We might later want to
change the calling convention of ResolveNew so that it can fix the
outer query too, simplifying callers. But I went with the localized
fix for now. Per bug report from J Smith, 20-Oct-03.
in the schema search path. Otherwise pg_dump doesn't correctly dump
scenarios where a custom opclass is created in 'public' and then used
by indexes in other schemas.
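A hedged sketch of that scenario (object names are hypothetical):

    CREATE OPERATOR CLASS my_int_ops
        FOR TYPE int4 USING btree AS
        OPERATOR 1 < , OPERATOR 2 <= , OPERATOR 3 = ,
        OPERATOR 4 >= , OPERATOR 5 > ,
        FUNCTION 1 btint4cmp(int4, int4);            -- opclass created in 'public'
    CREATE SCHEMA other;
    CREATE TABLE other.t (x int4);
    CREATE INDEX t_x_idx ON other.t (x my_int_ops);  -- used from another schema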
in HAVE_INT64_TIMESTAMP cases, including two potential stack smashes
when more than six fractional digits were supplied. Per bug report
from Philipp Reisner.
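For illustration, assuming the inputs in question are timestamp/interval
literals with long fractional-seconds fields, values of roughly this shape
were enough to hit the overrun on an integer-timestamp build:

    SELECT TIMESTAMP '2003-11-01 12:34:56.123456789';  -- more than six fractional digits
    SELECT INTERVAL '12:34:56.123456789';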
and send() very well at all; and in any case we can't use retval==0
for EOF due to race conditions. Make the same fixes in the backend as
are required in libpq.
bottom. Otherwise we fail to moveright when the root page was split while
we were "in flight" to it. This is not a significant problem when the root
is above the leaf level, but if the root was also a leaf (ie, a single-page
index just got split) we may return the wrong leaf page to the caller,
resulting in failure to find a key that is in fact present. Bug has existed
at least since 7.1, probably forever.
database, emit a WARNING and do nothing, rather than raising ERROR.
Per recent discussion in which we concluded this is the best way to deal
with database dumps that are reloaded into a database of a new name.
fixed incorrect initial setting of StartUpID. The logic in XLogWrite()
expects that Write->curridx is advanced to the next page as soon as
LogwrtResult points to the end of the current page, but StartupXLOG()
failed to make that happen when the old WAL ended exactly on a page
boundary. Per trouble report from Hannu Krosing.
This is no longer necessary or appropriate since we don't use zero typeid
as a wildcard anymore, and it fixes a nasty performance problem with
functions with many parameters. Per recent example from Reuven Lerner.
catalog lookups when not in a transaction. This prevents bizarre
failures if someone tries to set a value for session_authorization in
postgresql.conf. Per report from Fernando Nasser.
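That is, a line like the following in postgresql.conf (the value shown is
purely an example) was enough to provoke the failures:

    session_authorization = 'postgres'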
example from Rao Kumar. This is a very narrow corner case, requiring
a minimum of three closely-spaced database crashes and an unlucky
positioning of the second recovery's checkpoint record before you'd notice
any problem. But the consequences are dire enough that it's a must-fix.
dead xlog segments are not considered part of a critical section. It is
not necessary to force a database-wide panic if we get a failure in these
operations. Per recent trouble reports.
Per recent discussion on pgsql-general, this is appropriate for spec
compliance, and has the nice side-effect of easing porting from old
pg_dump files that exhibit the 59.999=>60.000 roundoff problem.
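Assuming the change here is acceptance of 60 in the seconds field (the form
such old dump files contain), input like the following is now accepted
instead of being rejected:

    SELECT TIMESTAMP '2002-12-31 23:59:60.00';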
implementation limits, do not issue an ERROR; instead issue a NOTICE and use
the max supported value. Per pgsql-general discussion of 28-Apr, this is
needed to allow easy porting from pre-7.3 releases where the limits were
higher.
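A hedged illustration, assuming for the sake of example that the over-limit
precision is a timestamp's fractional-seconds precision:

    CREATE TABLE events (t timestamp(20));
    -- previously an ERROR; now a NOTICE, and the maximum supported
    -- precision is applied instead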
Unrelated change in same area: accept GLOBAL TEMP/TEMPORARY as a synonym
for TEMPORARY, as per pgsql-hackers discussion of 15-Apr. We previously
rejected it, but that was based on a misreading of the spec --- SQL92's
GLOBAL temp tables are really closer to what we have than their LOCAL ones.
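For example, this is now accepted and treated the same as a plain TEMPORARY
table:

    CREATE GLOBAL TEMPORARY TABLE scratch (x int);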