You can now set your options in any or all of $PGDATA/configuration,
a postmaster command-line option (e.g. --enable-fsync=off), or a SET
command (illustrated below). The list of options is in
backend/utils/misc/guc.c; documentation will be written posthaste.
pg_options is gone, and so is the pg_geqo config file. Also removed
were the backend -K, -Q, and -T options (no longer applicable; -d0
does the same as -Q). Added an --enable-syslog option to configure.
Changed all callers from TPRINTF to elog(DEBUG).
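
For illustration, roughly how a setting can be made in each of the
three places (exact option spellings are defined in guc.c; treat these
as sketches, not verified syntax):

    # in $PGDATA/configuration: one 'name = value' entry per line
    fsync = off

    # on the postmaster command line
    postmaster --enable-fsync=off

    # per-session, from SQL, for options that may be SET
    SET geqo TO 'off';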
The executor sizes its hash table for an average of NTUP_PER_BUCKET
tuples per bucket, but cost_hashjoin was assuming a target load of one
tuple per bucket. This was causing a noticeable underestimate of
hashjoin costs.
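
A sketch of the corrected accounting (illustrative only, not the
actual cost_hashjoin code; the constant's value is whatever nodeHash
uses):

    /* Each outer-tuple probe examines about a bucket's worth of
     * inner tuples, not just one. */
    #define NTUP_PER_BUCKET 10          /* illustrative value */

    static double
    probe_cost(double outer_rows, double cpu_per_tuple)
    {
        /* old, wrong estimate: outer_rows * cpu_per_tuple * 1.0 */
        return outer_rows * cpu_per_tuple * NTUP_PER_BUCKET;
    }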
Make a NULL result from a constraint condition not violate the
constraint (cf. discussion on pghackers 12/9/99). Implemented by adding
a parameter to ExecQual, specifying whether to return TRUE or FALSE
when the qual result is really NULL in three-valued boolean logic.
Currently, ExecRelCheck is the only caller that asks for TRUE, but if
we find any other places that have the wrong response to NULL, it will
be easy to fix them.
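
A minimal sketch of the idea (types and names hypothetical, not the
real ExecQual signature):

    #include <stdbool.h>

    typedef enum { QUAL_FALSE, QUAL_TRUE, QUAL_NULL } TriBool;

    /* The new parameter decides which way a NULL qual result falls. */
    static bool
    qual_passes(TriBool result, bool resultForNull)
    {
        if (result == QUAL_NULL)
            return resultForNull;   /* TRUE for CHECK constraints
                                     * (ExecRelCheck); FALSE for
                                     * ordinary WHERE quals */
        return result == QUAL_TRUE;
    }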
Rewrote BufFile so that it handles multi-segment temporary files
transparently. This allows sorts and hashes to work with data exceeding
2 GB (or whatever the local limit on file size is). Changed psort.c to
use relative seeks instead of absolute seeks for backwards scanning, so
that it won't fail when the data volume exceeds 2 GB.
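
The idea in miniature (names and segment size hypothetical, not the
BufFile API): positions are kept as a (segment, offset) pair and moved
by deltas, so no single absolute offset ever has to fit in a long.

    #define SEG_SIZE (1024L * 1024L * 1024L)   /* e.g. 1 GB per segment */

    typedef struct { int segno; long offset; } SegPos;

    /* Advance (or back up) a position without forming an absolute
     * offset that could overflow. */
    static SegPos
    advance(SegPos p, long delta)
    {
        long off = p.offset + delta;
        while (off >= SEG_SIZE) { off -= SEG_SIZE; p.segno++; }
        while (off < 0)         { off += SEG_SIZE; p.segno--; }
        p.offset = off;
        return p;
    }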
Replaced the hashjoin executor's fixed-size hashtable with an
expansible one. This should prevent 'hashtable out of memory' errors,
unless you really do run out of memory. Note: the target size for the
hashtable is now taken from the -S postmaster switch, not -B, since it
is local memory in the backend rather than shared memory.
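
The sizing rule, sketched (function name hypothetical, and assuming
the -S switch is given in kilobytes):

    /* -S sets backend-local sort/hash memory; -B (shared buffers)
     * is no longer consulted here. */
    static long
    hashtable_mem_target(int sort_mem_kb)
    {
        return (long) sort_mem_kb * 1024L;      /* bytes */
    }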
Hashjoin was almost certain to fail any time it decided the relation to
be hashed was too big to fit in memory: the code for 'batching' a
series of hashjoins had multiple errors. I've fixed the easier
problems. A remaining big problem is that you can get 'hashtable out of
memory' if the code's guesstimate about how much overflow space it will
need turns out wrong. That will require much more extensive revisions
to fix, so I'm committing these fixes now before I start on that
problem.
Fixed hashjoin's hashFunc() so that it does the right thing with
pass-by-value data types (the old code would always return 0 for int2
or char values, which would work but would slow things down a lot).
Extended the opr_sanity regress test to catch more kinds of errors.
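
The essence of the fix, sketched (simplified, not the actual hashFunc;
endianness details ignored): for pass-by-value types the Datum itself
holds the value, so its bytes are hashed; only pass-by-reference types
are hashed through the pointer.

    #include <stddef.h>

    typedef unsigned long Datum;    /* stand-in for postgres.h's Datum */

    static unsigned int
    hash_bytes(const unsigned char *k, size_t len)
    {
        unsigned int h = 0;
        while (len-- > 0)
            h = h * 31 + *k++;      /* placeholder mixing step */
        return h;
    }

    static unsigned int
    hash_datum(Datum d, int typbyval, size_t typlen)
    {
        if (typbyval)               /* e.g. int2, char, int4 */
            return hash_bytes((const unsigned char *) &d, typlen);
        return hash_bytes((const unsigned char *) (void *) d, typlen);
    }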
Changed appendStringInfo so that you can state a format and arguments
directly. The old behavior required each appendStringInfo to have a
sprintf() before it if any formatting was required.
Also shortened several instances where there were multiple
appendStringInfo() calls in a row, each doing nothing more than adding
one more word to the string, by doing them all in one call.
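
For illustration (the variables are made up):

    /* Before: format into a scratch buffer, then append it. */
    char buf[256];
    sprintf(buf, "attribute %d of relation %s", attnum, relname);
    appendStringInfo(str, buf);

    /* After: format directly, and fold a run of one-word appends
     * into a single call. */
    appendStringInfo(str, "attribute %d of relation %s", attnum, relname);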
Implemented ExecReScan for nodeAgg, nodeHash, nodeHashjoin,
nodeNestloop and nodeResult. Fixed ExecReScan for nodeMaterial. Got rid
of #ifdef INDEXSCAN_PATCH, and of ExecMarkPos and ExecRestrPos in
nodeNestloop.
==========================================
What follows is a set of diffs that cleans up the usage of BLCKSZ.
As a side effect, the person compiling the code can change the
value of BLCKSZ _at_their_own_risk_. By that, I mean that I've
tried it here at 4096 and 16384 with no ill-effects. A value
of 4096 _shouldn't_ affect much as far as the kernel/file system
goes, but making it bigger than 8192 can have severe consequences
if you don't know what you're doing. 16384 worked for me, _BUT_
when I went to 32768 and did an initdb, the SCSI driver broke and
the partition I was running on went to hell in a handbasket. I had
to reboot and do a good bit of fsck'ing to fix things up.
The patch can be safely applied though. Just leave BLCKSZ = 8192
and everything is as before. It basically only cleans up all of the
references to BLCKSZ in the code.
If this patch is applied, though, a comment in the config.h file above
the BLCKSZ define warning against monkeying around with it would be a
good idea.
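
Something along these lines, say (the wording is only a suggestion,
drawn from the results above):

    /*
     * Size of a disk block --- 8192 is the only well-tested value.
     * Change it AT YOUR OWN RISK: 4096 and 16384 have been tried and
     * worked, but larger values can break things badly.
     */
    #define BLCKSZ  8192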
Darren darrenk@insightdist.com
(Also cleans up some of the #includes in files referencing BLCKSZ.)
==========================================
|
|This patch fixes a backend crash that sometimes happens when you try to
|join on a field that contains NULL in some rows. Postgres tries to
|compute a hash value of the field you're joining on, but when the field
|is NULL, the pointer it thinks points to the data is really pointing
|at random memory. The patch forces the hash value of NULL to be 0.
|
|It seems that nothing matches NULL on joins, not even other NULLs (with or
|without this patch). Is that what's supposed to happen?
|
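
A minimal sketch of the guard described above (names hypothetical). As
for the question: NULLs failing to match even each other is SQL's
defined behavior, since NULL compares as unknown to everything,
including another NULL.

    /* Check the null flag before touching the data pointer, instead
     * of hashing whatever the garbage pointer happens to reference. */
    static unsigned int
    hash_field(const void *value, int is_null,
               unsigned int (*hashfn)(const void *))
    {
        if (is_null)
            return 0;           /* every NULL hashes to bucket 0 */
        return hashfn(value);
    }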