SI invalidation events, rather than indirectly through the relcache.
In the previous coding, we had to flush a composite-type typcache entry
whenever we discarded the corresponding relcache entry. This caused problems
at least when testing with RELCACHE_FORCE_RELEASE, as shown in a recent report
from Jeff Davis, and might result in real-world problems given the kind of
unexpected relcache flush that that test mechanism is intended to model.
The new coding decouples relcache and typcache management, which is a good
thing anyway from a structural perspective. The cost is that we have to
search the typcache linearly to find entries that need to be flushed. There
are a couple of ways we could avoid that, but at the moment it's not clear
it's worth any extra trouble, because the typcache contains very few entries
in typical operation.
Back-patch to 8.2, the same as some other recent fixes in this general area.
The patch could be carried back to 8.0 with some additional work, but given
that it's only hypothetical whether we're fixing any problem observable in
the field, it doesn't seem worth the work now.
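A rough sketch of the new approach (illustrative only, following the dynahash conventions the typcache already uses; the function name, hash table name, and field names here are assumptions, not the literal patch): the SI callback simply walks the whole cache and drops any cached composite-type tuple descriptor belonging to the flushed relation.

    /* Sketch of an SI inval callback that linearly scans the typcache. */
    static void
    TypeCacheRelCallback(Datum arg, Oid relid)
    {
        HASH_SEQ_STATUS status;
        TypeCacheEntry *typentry;

        hash_seq_init(&status, TypeCacheHash);
        while ((typentry = (TypeCacheEntry *) hash_seq_search(&status)) != NULL)
        {
            if (typentry->tupDesc == NULL)
                continue;           /* not a composite-type entry */
            if (relid == typentry->typrelid || relid == InvalidOid)
            {
                /* drop the cached descriptor; the next lookup rebuilds it */
                if (--typentry->tupDesc->tdrefcount == 0)
                    FreeTupleDesc(typentry->tupDesc);
                typentry->tupDesc = NULL;
            }
        }
    }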
This patch changes _bt_split() and _bt_pagedel() to throw a plain ERROR,
rather than PANIC, for several cases that are reported from the field
from time to time:
* right sibling's left-link doesn't match;
* PageAddItem failure during _bt_split();
* parent page's next child isn't right sibling during _bt_pagedel().
In addition, the error messages for these cases have been made a bit
more verbose, with additional values included.
The original motivation for PANIC here was to capture core dumps for
subsequent analysis. But with so many users whose platforms don't capture
core dumps by default, or who are unprepared to analyze them anyway, it's hard
to justify a forced database restart when we can fairly easily detect the
problems before we've reached the critical sections where PANIC would be
necessary. It is not currently known whether the reports of these messages
indicate well-hidden bugs in Postgres, or are a result of storage-level
malfeasance; the latter possibility suggests that we ought to try to be more
robust even if there is a bug here that's ultimately found.
Backpatch to 8.2. The code before that is sufficiently different that
it doesn't seem worth the trouble to back-port further.
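For example, the right-sibling cross-check can be reported before entering the critical section; a hedged sketch of the shape of such a check inside _bt_split() (variable names are illustrative, not the exact patch):

    /* before the critical section begins */
    if (sopaque->btpo_prev != origpagenumber)
        ereport(ERROR,
                (errcode(ERRCODE_INDEX_CORRUPTED),
                 errmsg("right sibling's left-link doesn't match: "
                        "block %u links to %u instead of expected %u in index \"%s\"",
                        oopaque->btpo_next, sopaque->btpo_prev, origpagenumber,
                        RelationGetRelationName(rel))));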
returning "record" actually do have the same rowtype. This is needed because
the parser can't realistically enforce that they will all have the same typmod,
as seen in a recent example from David Wheeler.
Back-patch to 8.0, which is as far back as we have the notion of RECORD
subtypes being distinguished by typmod. Wheeler's example depends on
8.4-and-up features, but I suspect there may be ways to provoke similar
failures before 8.4.
_outPlannedStmt is only debug support, so the omission there was not very
serious, but the omission in _copyPlannedStmt is a real bug. The consequence
would be that a copied plan tree would never be marked as a transient plan,
so that we would forget we ought to replan it after some not-yet-ready index
becomes ready for use. This might explain some past complaints about indexes
created with CREATE INDEX CONCURRENTLY not being used right away. Problem
spotted by Yeb Havinga.
Back-patch to 8.3, where the field was added.
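The omission amounts to one line in each function; a sketch of the kind of addition involved (the transientPlan name matches the PlannedStmt flag in question, but treat the exact lines as illustrative):

    /* in _copyPlannedStmt() */
    COPY_SCALAR_FIELD(transientPlan);

    /* in _outPlannedStmt() */
    WRITE_BOOL_FIELD(transientPlan);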
socket lockfile) when writing them. The lack of an fsync here may well
explain two different reports we've seen of corrupted lockfile contents,
which doesn't particularly bother the running server but can prevent a
new server from starting if the old one crashes. Per suggestion from
Alvaro.
Back-patch to all supported versions.
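A minimal standalone sketch of the pattern (not the actual miscinit.c code): write the file, then fsync it before trusting that its contents will survive a crash.

    #include <fcntl.h>
    #include <string.h>
    #include <unistd.h>

    /* Returns 0 on success, -1 on failure. */
    static int
    write_lockfile(const char *path, const char *contents)
    {
        int     fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0600);
        ssize_t len = (ssize_t) strlen(contents);

        if (fd < 0)
            return -1;
        if (write(fd, contents, (size_t) len) != len ||
            fsync(fd) != 0)             /* the step that was missing */
        {
            close(fd);
            return -1;
        }
        return close(fd);
    }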
path that specifies useTemp, but there is no active temp schema in the
current session. (This can happen if the path was saved during a transaction
that created a temp schema and was later rolled back.) For existing callers
it's sufficient to ignore the useTemp flag in this case, though we might
later want to offer an option to create a fresh temp schema. So far as I can
tell this is just an Assert failure: in a non-assert build, the code would
push a zero onto the new search path, which is useless but not very harmful.
Per bug report from Heikki.
Back-patch to 8.3; prior versions don't have this code.
tsqueries. CompareTSQ has to have a guard for the case rather than blindly
applying QTNodeCompare to random data past the end of the datums. Also,
change QTNodeCompare to be a little less trusting: use an actual test rather
than just Assert'ing that the input is sane. Problem encountered while
investigating another issue (I saw a core dump in autoanalyze on a table
containing multiple empty tsquery values).
Back-patch to all branches with tsquery support.
In HEAD, also fix some bizarre (though not outright wrong) coding in
tsq_mcontains().
look through join alias Vars to avoid breaking join queries, and
move the test to someplace where it will catch more possible ways
of calling a function. We still ought to throw away the whole thing
in favor of a data-type-based solution, but that's not feasible in
the back branches.
Completion of back-port of my patch of yesterday.
assuming that a local char[] array would be aligned on at least a word
boundary. There are architectures on which that is pretty much guaranteed to
NOT be the case ... and those arches also don't like non-aligned memory
accesses, meaning that log_newpage() would crash if it ever got invoked.
Even on Intel-ish machines there's a potential for a large performance penalty
from doing I/O to an inadequately aligned buffer. So palloc it instead.
Backpatch to 8.0 --- 7.4 doesn't have this code.
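The shape of the fix is simply to replace the stack buffer with palloc'd memory, which is always MAXALIGN'ed; a sketch, not the literal diff:

    /* was: char copied_buffer[BLCKSZ];  -- no alignment guarantee */
    char   *copied_buffer = palloc(BLCKSZ);     /* MAXALIGN'ed */

    memcpy(copied_buffer, page, BLCKSZ);
    /* ... assemble and insert the XLOG record from copied_buffer ... */
    pfree(copied_buffer);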
If a zeroed page is present in the heap, ALTER TABLE .. SET TABLESPACE will
set the LSN and TLI while copying it, which is wrong, and heap_xlog_newpage()
will do the same thing during replay, so the corruption propagates to any
standby. Note, however, that the bug can't be demonstrated unless archiving
is enabled, since otherwise we skip WAL logging altogether, and the LSN/TLI
are not set.
Back-patch to 8.0; prior releases do not have tablespaces.
Analysis and patch by Jeff Davis. Adjustments for back-branches and minor
wordsmithing by me.
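A sketch of the guard as it would appear both in the copy loop and in heap_xlog_newpage() (illustrative; the real patch may differ in detail): only stamp the LSN and TLI if the page actually has a header.

    if (!PageIsNew(page))
    {
        PageSetLSN(page, lsn);
        PageSetTLI(page, ThisTimeLineID);
    }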
a pass-by-reference datatype with a nontrivial projection step.
We were using the same memory context for the projection operation as for
the temporary context used by the hashtable routines in execGrouping.c.
However, the hashtable routines feel free to reset their temp context at
any time, which'd lead to destroying input data that was still needed.
Report and diagnosis by Tao Ma.
Back-patch to 8.1, where the problem was introduced by the changes that
allowed us to work with "virtual" tuples instead of materializing intermediate
tuple values everywhere. The earlier code looks quite similar, but it doesn't
suffer the problem because the data gets copied into another context as a
result of having to materialize ExecProject's output tuple.
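The general pattern of the fix is to give the hashtable its own short-lived context rather than sharing the one the projection is still reading from; a minimal sketch with hypothetical names, using the AllocSetContextCreate signature current in these branches:

    /* dedicated resettable context for the execGrouping hashtable */
    MemoryContext hash_tmp_cxt =
        AllocSetContextCreate(CurrentMemoryContext,
                              "HashTable temp context",
                              ALLOCSET_DEFAULT_MINSIZE,
                              ALLOCSET_DEFAULT_INITSIZE,
                              ALLOCSET_DEFAULT_MAXSIZE);

    /* pass hash_tmp_cxt, not the projection's per-tuple context,
     * as the hashtable's temporary context */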
loop from being dropped, I missed subtransaction cleanup. Pinned portals
must be dropped at subtransaction cleanup just as they are at main
transaction cleanup.
Per bug #5556 by Robert Walker. Backpatch to 8.0; 7.4 didn't have
subtransactions.
use the actual element type of the array it's disassembling, rather than
trusting the type OID passed in by its caller. This is needed because
sometimes the planner passes in a type OID that's only binary-compatible
with the target column's type, rather than being an exact match. Per an
example from Bernd Helmle.
Possibly we should refactor get_attstatsslot/free_attstatsslot to not expect
the caller to supply type ID data at all, but for now I'll just do the
minimum-change fix.
Back-patch to 7.4. Bernd's test case only crashes back to 8.0, but since
these subroutines are the same in 7.4, I suspect there may be variant
cases that would crash 7.4 as well.
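A sketch of the idea (local variable names are hypothetical): take the element type from the array itself and derive the len/byval/align data from that, rather than from the caller-supplied OID.

    Oid     arrelemtype = ARR_ELEMTYPE(statarray);
    int16   elmlen;
    bool    elmbyval;
    char    elmalign;

    get_typlenbyvalalign(arrelemtype, &elmlen, &elmbyval, &elmalign);
    deconstruct_array(statarray, arrelemtype,
                      elmlen, elmbyval, elmalign,
                      &values, NULL, &nvalues);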
sub-select contains a join alias reference that expands into an expression
containing another sub-select. Per yesterday's report from Merlin Moncure
and subsequent off-list investigation.
Back-patch to 7.4. Older versions didn't attempt to flatten sub-selects in
ways that would trigger this problem.
being used in a PL/pgSQL FOR loop is closed was inadequate, as Tom Lane
pointed out. The bug affects FOR statement variants too, because you can
close even an implicitly created cursor by guessing the "<unnamed portal X>"
name created for it.
To fix that, "pin" the portal to prevent it from being dropped while it's
being used in a PL/pgSQL FOR loop. Backpatch all the way to 7.4 which is
the oldest supported version.
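The usage pattern, sketched (PinPortal/UnpinPortal are the portalmem.c primitives this kind of fix relies on; the surrounding lines are illustrative, not the exact pl_exec.c code):

    PinPortal(portal);          /* user code can no longer CLOSE it */
    /* ... fetch from the portal and run the loop body for each row ... */
    UnpinPortal(portal);
    SPI_cursor_close(portal);   /* only for an implicitly created cursor */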
Backpatch to 8.3, which is as far back as we have opfamilies.
The opclass portion could probably be backpatched to 8.2, when
REASSIGN OWNED was added, but for now I have not done that.
Asko Tiidumaa, with minor adjustments by me.
but we have nevertheless exposed them to users via pg_get_expr(). It would
be too much maintenance effort to rigorously check the input, so put a hack
in place instead to restrict pg_get_expr() so that the argument must come
from one of the system catalog columns known to contain valid expressions.
Per report from Rushabh Lathia. Backpatch to 7.4 which is the oldest
supported version at the moment.
the current one. Not doing this would leave the walwriter with a handle to a
deleted file if there was nothing for it to do for a long period of time,
preventing the file from being completely removed.
Reported by Tollef Fog Heen, and thanks to Heikki for some hand-holding with
the patch.
for sure ;-)). It now also optimizes more cases, such as %_%_. Improve
comments too. Per bug #5478.
In passing, also rename the TCHAR macro to GETCHAR, because pgindent is
messing with the formatting of the former (apparently it now thinks TCHAR
is a typedef name).
Back-patch to 8.3, where the bug was introduced.
is treated like end-of-input, if nulls sort last in that column and we are not
doing outer-join filling for that input. In such a case, the tuple cannot
join to anything from the other input (because we assume mergejoinable
operators are strict), and neither can any tuple following it in the sort
order. If we're not interested in doing outer-join filling we can just
pretend the tuple and its successors aren't there at all. This can save a
great deal of time in situations where there are many nulls in the join
column, as in a recent example from Scott Marlowe. Also, since the planner
tends to not count nulls in its mergejoin scan selectivity estimates, this
is an important fix to make the runtime behavior more like the estimate.
I regard this as an omission in the patch I wrote years ago to teach mergejoin
that tuples containing nulls aren't joinable, so I'm back-patching it. But
only to 8.3 --- in older versions, we didn't have a solid notion of whether
nulls sort high or low, so attempting to apply this optimization could break
things.
This saves cycles in get_ps_display() on many popular platforms, and more
importantly ensures that get_ps_display() will correctly return an empty
string if init_ps_display() hasn't been called yet. Per trouble report
from Ray Stell, in which log_line_prefix %i produced junk early in backend
startup.
Back-patch to 8.0. 7.4 doesn't have %i and its version of get_ps_display()
makes no pretense of avoiding pad junk anyhow.
archive_command) as soon as possible, namely just before issuing a new call
of archive_command, even when there is a backlog of files to be archived.
The original coding would only absorb new settings after clearing the backlog
and returning to the outer loop. Per discussion.
Back-patch to 8.3. The logic in prior versions is a bit different and it
doesn't seem worth taking any risks of breaking it.
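A sketch of the loop shape after the change (the helper names here are hypothetical, not the real pgarch.c identifiers): the pending-SIGHUP check moves inside the per-file loop.

    while (find_ready_xlog_file(xlogname))      /* hypothetical helper */
    {
        /* absorb a pending reload before each archive_command call */
        if (got_SIGHUP)
        {
            got_SIGHUP = false;
            ProcessConfigFile(PGC_SIGHUP);
        }
        if (!run_archive_command(xlogname))     /* hypothetical helper */
            break;                              /* failed; retry later */
    }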
Now validators work properly even when the settings contain
parameters that affect the behavior of the function, such as search_path.
Reported by Erwin Brandstetter.
Depending on which spec you read, field widths and precisions in %s may be
counted either in bytes or characters. Our code was assuming bytes, which
is wrong at least for glibc's implementation, and in any case libc might
have a different idea of the prevailing encoding than we do. Hence, for
portable results we must avoid using anything more complex than just "%s"
unless the string to be printed is known to be all-ASCII.
This patch fixes the cases I could find, including the psql formatting
failure reported by Hernan Gonzalez. In HEAD only, I also added comments
to some places where it appears safe to continue using "%.*s".
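A small standalone illustration of the portability hazard: whether the two brackets below line up depends on whether the libc counts the %s field width in bytes or in characters for the multibyte string.

    #include <locale.h>
    #include <stdio.h>

    int
    main(void)
    {
        setlocale(LC_ALL, "en_US.UTF-8");   /* assumes a UTF-8 locale exists */

        /* "héllo": 5 characters but 6 bytes in UTF-8 */
        printf("[%10s]\n", "h\xc3\xa9llo");
        /* 5 characters, 5 bytes */
        printf("[%10s]\n", "hello");
        return 0;
    }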
returns EINVAL for an existing shared memory segment. Although it's not
terribly sensible, that behavior does meet the POSIX spec because EINVAL
is the appropriate error code when the existing segment is smaller than the
requested size, and the spec explicitly disclaims any particular ordering of
error checks. Moreover, it does in fact happen on OS X and probably other
BSD-derived kernels. (We were able to talk NetBSD into changing their code,
but purging that behavior from the wild completely seems unlikely to happen.)
We need to distinguish collision with a pre-existing segment from invalid size
request in order to behave sensibly, so it's worth some extra code here to get
it right. Per report from Gavin Kistner and subsequent investigation.
Back-patch to all supported versions, since any of them could get used
with a kernel having the debatable behavior.
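A standalone sketch of the disambiguation (not the actual sysv_shmem.c logic): after an EINVAL from a create-exclusive shmget(), probe whether a segment already exists under the key, and if so treat the failure as a collision.

    #include <errno.h>
    #include <sys/types.h>
    #include <sys/ipc.h>
    #include <sys/shm.h>

    /* Returns the new segment id, or -1 with errno set. */
    static int
    create_shared_segment(key_t key, size_t size)
    {
        int shmid = shmget(key, size, IPC_CREAT | IPC_EXCL | 0600);

        if (shmid < 0 && errno == EINVAL)
        {
            /* EINVAL can also mean "existing segment is too small";
             * probe for an existing segment to tell the cases apart. */
            if (shmget(key, 0, 0) >= 0)
                errno = EEXIST;     /* report it as a collision instead */
        }
        return shmid;
    }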
reload and rotation signals, and a helper thread reads messages from the
pipe and writes them to the log file. However, server code isn't generally
thread-safe, so if both try to do, e.g., palloc()/pfree() at the same time,
bad things will happen. To fix that, use a critical section (which is like
a mutex) to enforce that only one of the threads is active at a time.
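A sketch of the mechanism (Windows-specific; the placement and wrapper are illustrative, not the actual syslogger.c code):

    #include <windows.h>

    static CRITICAL_SECTION sysloggerSection;

    static void
    syslogger_lock_init(void)
    {
        InitializeCriticalSection(&sysloggerSection);   /* once, at startup */
    }

    static void
    with_syslogger_lock(void (*work)(void))
    {
        /* only one thread at a time runs backend code such as
         * palloc/pfree, writing the log file, or rotating it */
        EnterCriticalSection(&sysloggerSection);
        work();
        LeaveCriticalSection(&sysloggerSection);
    }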
relcache reload works. In the patched code, a relcache entry in process of
being rebuilt doesn't get unhooked from the relcache hash table, which means
that if a cache flush occurs due to sinval queue overrun while we're
rebuilding it, the entry could get blown away by RelationCacheInvalidate,
resulting in crash or misbehavior. Fix by ensuring that an entry being
rebuilt has positive refcount, so it won't be seen as a target for removal
if a cache flush occurs. (This will mean that the entry gets rebuilt twice
in such a scenario, but that's okay.) It appears that the problem can only
arise within a transaction that has previously reassigned the relfilenode of
a pre-existing table, via TRUNCATE or a similar operation. Per bug #5412
from Rusty Conover.
Back-patch to 8.2, same as the patch that introduced the problem.
I think that the failure can't actually occur in 8.2, since it lacks the
rd_newRelfilenodeSubid optimization, but let's make it work like the later
branches anyway.
Patch by Heikki, slightly editorialized on by me.
Windows, thanks to a feature in CRT called Parameter Validation.
Backpatch to 8.2, which is the oldest version supported on Windows. In
8.2 and 8.3 also backpatch the earlier change to use DEVNULL instead of
NULL_DEV #define for a /dev/null-like device. NULL_DEV was hard-coded to
"/dev/null" regardless of platform, which didn't work on Windows, while
DEVNULL works on all platforms. Restarting syslogger didn't work on
Windows on versions 8.3 and below because of that.
by a superuser -- "ALTER USER f RESET setting" already disallows removing such a
setting.
Apply the same treatment to ALTER DATABASE d RESET ALL when run by a database
owner who is not a superuser.
so that we won't try to attach any context printouts to messages that get
emitted while exiting. Per report from Dennis Koegel, the context functions
won't necessarily work after we've started shutting down the backend, and it
seems possible that debug_query_string could be pointing at freed storage
as well. The context information doesn't seem particularly relevant to
such messages anyway, so there's little lost by suppressing it.
Back-patch to all supported branches. I can only demonstrate a crash with
log_disconnections messages back to 8.1, but the risk seems real in 8.0 and
before anyway.
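The core of such a fix is small; a sketch of the idea (exact placement during backend shutdown is illustrative):

    /* Once shutdown has begun, don't run context callbacks or refer to
     * the saved query string; both may point at state that is going away. */
    error_context_stack = NULL;
    debug_query_string = NULL;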
unless (1) the @ isn't quoted and (2) the filename isn't empty. This guards
against unexpectedly treating usernames or other strings in "flat files"
as inclusion requests, as seen in a recent trouble report from Ed L.
The empty-filename case would be guaranteed to misbehave anyway, because our
subsequent path-munging behavior results in trying to read the directory
containing the current input file.
I think this might finally explain the report at
http://archives.postgresql.org/pgsql-bugs/2004-05/msg00132.php
of a crash after printing "authentication file token too long, skipping",
since I was able to duplicate that message (though not a crash) on a
platform where stdio doesn't refuse to read directories. We never got
far in investigating that problem, but now I'm suspicious that the trigger
condition was an @ in the flat password file.
Back-patch to all active branches since the problem can be demonstrated in all
branches except HEAD. The test case, creating a user named "@", doesn't cause
a problem in HEAD since we got rid of the flat password file. Nonetheless it
seems like a good idea to not consider quoted @ as a file inclusion spec,
so I changed HEAD too.
set ferror() but never set feof(). This is known to be the case for recent
glibc when trying to read a directory as a file, and might be true for other
platforms/cases too. Per report from Ed L. (There is more that we ought to
do about his report, but this is one easily identifiable issue.)
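A standalone sketch of the defensive stdio pattern (hypothetical helper): stop on a short read and check ferror() explicitly rather than waiting for feof() to become true.

    #include <stdio.h>

    /* Returns total bytes read, or -1 on a read error. */
    static long
    slurp_bytes(FILE *fp)
    {
        char    buf[8192];
        long    total = 0;
        size_t  n;

        while ((n = fread(buf, 1, sizeof(buf), fp)) > 0)
            total += (long) n;

        if (ferror(fp))
            return -1;      /* error, even though feof() may never be set */
        return total;
    }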
too, instead of duplicating the functionality (badly).
I renamed xml_init to pg_xml_init, because the former seemed just a bit too
generic to be safe as a global symbol. I considered likewise renaming
xml_ereport to pg_xml_ereport, but felt that the reference to ereport probably
made it sufficiently PG-centric already.
I was afraid to do this when these changes were first made, but now that
8.4 has seen some field use it should be all right to back-patch. These
changes are really quite necessary in order to give xml.c any hope of
co-existing with loadable modules that also wish to use libxml2.
We had originally made the stronger assumption that NOT A refutes any B
if B implies A, but this fails in three-valued logic, because we need to
prove that B is false, not just that it's not true. However, the logic does
go through if B is equal to A.
Recognizing this limited case is enough to handle examples that arise when
we have simplified "bool_var = true" or "bool_var = false" to just "bool_var"
or "NOT bool_var". If we had not done that simplification then the
btree-operator proof logic would have been able to prove that the expressions
were contradictory, but only for identical expressions being compared to the
constants; so handling identical A and B covers all the same cases.
The motivation for doing this is to avoid unexpected asymmetrical behavior
when a partitioned table uses a boolean partitioning column, as in today's
gripe from Dominik Sander.
Back-patch to 8.2, which is as far back as predicate_refuted_by attempts to
do anything at all with NOTs.
how often we do SSL session key renegotiation. Can be set to
0 to disable renegotiation completely, which is required if
a broken SSL library is used (broken patches for CVE-2009-3555 are
a known cause) or when using a client library that can't do
renegotiation.
segment of the XLOG_BACKUP_END record even if the record is placed
at a segment boundary. Furthermore, the previous implementation could
return a nonexistent segment file name when the boundary falls in a segment
that has the "FE" suffix; we never use segments with the "FF" suffix.
Backpatch to 8.0, where hot backup was introduced.
Reported by Fujii Masao.
being assigned to, in case the expression to be assigned is a FieldStore that
would need to modify that value. The need for this was foreseen some time
ago, but not implemented then because we did not have arrays of composites.
Now we do, but the point evidently got overlooked in that patch. Net result
is that updating a field of an array element doesn't work right, as
illustrated if you try the new regression test on an unpatched backend.
Noted while experimenting with EXPLAIN VERBOSE, which has also got some issues
in this area.
Backpatch to 8.3, where arrays of composites were introduced.
matching before recursing instead of after. The DFA match eliminates
unworkable midpoint choices a lot faster than the recursive check, in most
cases, so doing it first can speed things up; particularly in pathological
cases such as recently exhibited by Michael Glaesemann.
In addition, apply some cosmetic changes that were applied upstream (in the
Tcl project) at the same time, in order to sync with upstream version 1.15
of regexec.c.
Upstream apparently intends to backpatch this, so I will too. The
pathological behavior could be unpleasant if encountered in the field,
which seems to justify any risk of introducing new bugs.
Tom Lane, reviewed by Donal K. Fellows of Tcl project
There was a race condition where the receiving pipe could be closed by the
child thread if the main thread was pre-empted before it got a chance to
create a new one, and the dispatch thread ran to completion during that time.
One symptom of this is that rows in pg_listener could be dropped under
heavy load.
Analysis and original patch by Radu Ilie, with some small
modifications by Magnus Hagander.
Since all current and foreseeable future command tags will be pure ASCII,
there is no need to do conversion on them. This saves a few cycles and also
avoids polluting otherwise-pristine subtransaction memory contexts, which
is the cause of the backend memory leak exhibited in bug #5302. (Someday
we'll probably want to have a better method of determining whether
subtransaction contexts need to be kept around, but today is not that day.)
Backpatch to 8.0. The cycle-shaving aspect of this would work in 7.4
too, but without subtransactions the memory-leak aspect doesn't apply,
so it doesn't seem worth touching 7.4.
AbortTransaction or AbortSubTransaction, when trying to clean up after an
error that prevented (sub)transaction start from completing:
* access to TopTransactionResourceOwner that might not exist
* assert failure in AtEOXact_GUC, if AtStart_GUC not called yet
* assert failure or core dump in AfterTriggerEndSubXact, if
AfterTriggerBeginSubXact not called yet
Per testing by injecting elog(ERROR) at successive steps in StartTransaction
and StartSubTransaction. It's not clear whether all of these cases could
really occur in the field, but at least one of them is easily exposed by
simple stress testing, as per my accidental discovery yesterday.
the various disk-size-reporting functions will respond to query cancel
reasonably promptly even in very large databases. Per report from
Kevin Grittner.
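A sketch of the placement (not the exact dbsize.c diff): test for a pending cancel once per directory entry while summing file sizes.

    while ((direntry = ReadDir(dirdesc, path)) != NULL)
    {
        CHECK_FOR_INTERRUPTS();
        /* stat() the entry and add its size to the running total ... */
    }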
after it's released its reference count for the cached plan. There are
code paths that might try to examine the plan list before noticing that
the portal is already in aborted state. Report and diagnosis by Tatsuo
Ishii, though this isn't exactly his proposed patch.