entry:
----------------------------
revision 1.2
date: 2000/12/04 01:20:38;  author: tgl;  state: Exp;  lines: +18 -18
Eliminate some of the more blatant platform-dependencies ... it
builds here now, anyway ...
----------------------------
Which basically changes u_int*_t -> uint*_t, so now it does not compile
under either Debian 2.2 or NetBSD 1.5, which is platform independent all
right. Also it replaces $KAME$ with $Id$, which is a Bad Thing. The
PostgreSQL Id should be added as a separate line so the file history can
still be seen.
So here is patch:
* changes uint*_t -> uint*. I guess that was the original
intention
* adds a uint64 type to include/c.h because it's needed
[somebody should check if I did it right -- a sketch follows below]
* adds back the KAME Id, because KAME is the master repository
* removes stupid C++ comments in pgcrypto.c
* removes <sys/types.h> from the code; it's not needed
--
marko
Marko Kreen
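
For the uint64 item above, here is a minimal sketch of what the
include/c.h addition might look like. This is only an illustration:
the real header's choice depends on what configure detects, and the
HAVE_LONG_INT_64 / HAVE_LONG_LONG_INT_64 symbols are assumed to be the
same ones used for the signed int64 typedef.

    /* Sketch only: a 64-bit unsigned type for include/c.h, chosen by
     * the same configure tests that select the signed int64 typedef.
     * The exact symbols and fallbacks in the real header may differ. */
    #ifdef HAVE_LONG_INT_64
    typedef unsigned long int uint64;
    #else
    #ifdef HAVE_LONG_LONG_INT_64
    typedef unsigned long long int uint64;
    #endif
    #endif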
- no more elog(STOP) in StartupXLOG();
- both the checkpoint's undo and redo locations are used to define
  the oldest on-line log file.
2. Ability to pre-allocate a few log files at checkpoint time
(wal_files option). Off by default.
as both a GROUP BY item and an output expression, the top-level Group
node should just copy up the evaluated expression value from its input,
rather than re-evaluating the expression. Aside from any performance
benefit this might offer, this avoids a crash when there is a sub-SELECT
in said expression.
before calling RelationInvalidateHeapTuple(), which is bad because the
latter needs to look at the tuple data, which is in the shared disk
buffer. If another backend manages to recycle the buffer while this
is going on, we will compute the wrong hashindex for the tuple or
maybe even crash outright. Must hold buffer refcount until afterwards.
(This bug is not in 7.0.*; it seems to have been introduced during the WAL changes.)
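
A minimal sketch of the ordering being described; the variable names
and surrounding steps are illustrative, not the actual heapam.c code:

    /* Sketch: keep the buffer pinned until after the cache-invalidation
     * call, because RelationInvalidateHeapTuple() still reads the tuple
     * data that lives in the shared buffer. */
    Buffer  buf = ReadBuffer(relation, block);      /* acquires a pin */

    /* ... modify or delete the tuple on the pinned page ... */

    RelationInvalidateHeapTuple(relation, &tuple);  /* needs the page data */
    ReleaseBuffer(buf);                             /* drop the pin last */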
and burn. Just for added luck, change reading of CONST nodes so that
we do not need to consult pg_type rows while reading them; this means
that no database access occurs during stringToNode. This requires
changing the order in which const-node fields are written, which means
an initdb is forced.
in per-entry sub-memory-context, where they were supposed to go, rather
than in CacheMemoryContext where the code was putting them. Must've
suffered a severe brain fade when I wrote this :-(
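
The intended allocation pattern, sketched with made-up names ('entry'
and its fields are placeholders, not the actual cache code):

    /* Sketch: switch into the cache entry's own sub-context before
     * allocating, so the data dies with the entry instead of piling
     * up in CacheMemoryContext. */
    MemoryContext oldcxt;

    oldcxt = MemoryContextSwitchTo(entry->mcxt);
    entry->data = palloc(len);
    MemoryContextSwitchTo(oldcxt);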
sequences. This is done by disabling multi-byte awareness when it's
not necessary. This is kind of a workaround, not a perfect solution.
However, there is no ideal way to parse broken multi-byte character
sequences. So I guess this is the best we could do right
now...
and revert documentation to describe the existing INHERITS clause
instead, per recent discussion in pghackers. Also fix implementation
of the SQL_inheritance SET variable: it is not cool to look at this var
during the initial parsing phase; it should be consulted only during
parse_analyze(). See
recent bug report concerning misinterpretation of date constants just
after a SET TIMEZONE command. gram.y really has to be an invariant
transformation of the query string to a raw parsetree; anything that
can vary with time must be done during parse analysis.
The previous result did not have correct month boundaries, so anything near
the edge cases was suspect (e.g. April landed in Q1, and July and August
were lumped into Q2).
Thanks to Denis Osadchy <osadchy@turbo.nsk.su> for the report.
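
For illustration only, the month-to-quarter mapping the fix has to
produce is a one-line calculation; this is just the arithmetic, not the
actual date_trunc code:

    /* Sketch: first month of the quarter containing 'month' (1-based).
     * Months 1-3 -> 1, 4-6 -> 4, 7-9 -> 7, 10-12 -> 10, so April starts
     * Q2 and July/August belong to Q3. */
    int
    quarter_start_month(int month)
    {
        return ((month - 1) / 3) * 3 + 1;
    }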
The leak is caused by the memory allocation at line 669 of
src/interfaces/ecpg/lib/execute.c, which is never freed. Adding a
"free(array_query);" after the PQexec call at line 671 seems to fix
the leak.
Thorsten Knabe
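
A sketch of the pattern being described; the query text and the
'conn'/'results'/'type' names are placeholders, only the
malloc/PQexec/free ordering is the point:

    /* Sketch of the fix: the dynamically built query string must be
     * freed once PQexec() has been called.  Everything except the
     * free(array_query) line is illustrative, not the real execute.c. */
    char     *array_query = (char *) malloc(100);
    PGresult *results;

    snprintf(array_query, 100,
             "select typelem from pg_type where oid = %d", type);
    results = PQexec(conn, array_query);
    free(array_query);          /* the previously missing call */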
starting a new hashtable search no longer clobbers any other search
active anywhere in the system. Fix RelationCacheInvalidate() so that
it will not crash or go into an infinite loop if invoked recursively,
as for example by a second SI Reset message arriving while we are still
processing a prior one.
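
A sketch of the caller-owned scan state idea, using the
hash_seq_init/hash_seq_search interface as it exists in current
PostgreSQL; the hash table and entry type names are illustrative:

    /* Sketch: each scan carries its own HASH_SEQ_STATUS, so starting a
     * nested scan -- e.g. from a recursive cache-invalidation call --
     * cannot clobber an outer scan's position. */
    HASH_SEQ_STATUS  status;
    RelIdCacheEnt   *idhentry;

    hash_seq_init(&status, RelationIdCache);
    while ((idhentry = (RelIdCacheEnt *) hash_seq_search(&status)) != NULL)
    {
        /* ... may safely start another hash_seq scan here ... */
    }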