gotten rather thoroughly whacked out by careless recent changes: the
intended ratio between the two was off by a lot, and the minimum number
of shared buffers tried had increased by a lot. Problem exposed by
failures on buildfarm members with smaller SHMMAX values.
repeatedly. Now that we don't have to worry about memory leaks from
glibc's qsort, we can safely put CHECK_FOR_INTERRUPTS into the tuplesort
comparators, as was requested a couple months ago. Also, get rid of
non-reentrancy and an extra level of function call in tuplesort.c by
providing a variant qsort_arg() API that passes an extra void * argument
through to the comparison routine. (We might want to use that in other
places too, but I haven't looked yet.)
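A minimal sketch of what that qsort_arg() API looks like (the typedef and
declaration mirror the API described above; the comparator and its direction
flag are hypothetical illustrations):

    #include <stddef.h>

    /* qsort with an extra void * passed through to the comparator, so
     * per-sort state needs no global variable (and hence no
     * non-reentrancy). */
    typedef int (*qsort_arg_comparator) (const void *a, const void *b,
                                         void *arg);

    extern void qsort_arg(void *base, size_t nel, size_t elsize,
                          qsort_arg_comparator cmp, void *arg);

    /* Hypothetical comparator: sorts ints ascending or descending
     * depending on the flag passed through arg. */
    static int
    cmp_int_dir(const void *a, const void *b, void *arg)
    {
        int av = *(const int *) a;
        int bv = *(const int *) b;
        int cmp = (av > bv) - (av < bv);

        return *(int *) arg ? -cmp : cmp;   /* nonzero arg = descending */
    }

    /* usage: qsort_arg(vals, nvals, sizeof(int), cmp_int_dir, &descending); */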
1) Make vcbuild actually build the pgevent dll.
2) Change the pgevent DLL file so it doesn't specify ordinals for the
functions. You're not supposed to do that. You're actually supposed to
declare them as PRIVATE as well, but mingw doesn't support that. VC++
will throw a warning and not an error though, so we can live with it.
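For illustration, a hedged sketch of a module-definition file without
ordinals (the export names here are assumptions, not taken from the actual
pgevent sources):

    ; Exports are listed by name only, with no "@ ordinal" column.
    ; Ideally each would also be marked PRIVATE, but mingw rejects that.
    EXPORTS
        DllRegisterServer
        DllUnregisterServer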
Magnus Hagander
postgresql.conf.
- shared_buffers = 32000kB => 32MB
- temp_buffers = 8000kB => 8MB
- wal_buffers = 8 => 64kB
initdb was modified slightly to write values in MB units; values greater
than 8000kB are rounded to MB.
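As written by initdb, the corresponding postgresql.conf lines would now look
like this (values taken from the list above):

    shared_buffers = 32MB       # was 32000kB
    temp_buffers = 8MB          # was 8000kB
    wal_buffers = 64kB          # was 8 (XLOG blocks)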
GUC_UNIT_XBLOCKS is added for wal_buffers. It is like GUC_UNIT_BLOCKS,
but uses XLOG_BLCKSZ instead of BLCKSZ.
Also, I cleaned up the tests of the GUC_UNIT_* flags in preparation for
packing more unit flags into fewer bits.
ITAGAKI Takahiro
which turns out to be a dominant part of the runtime in scenarios
involving lots of parse-time warnings (such as Stephen Frost's example
of an INSERT with a lot of backslash-containing strings). There's not
a whole lot we can do about the character-at-a-time scanning, but we
can at least avoid traversing the query twice.
severity. This is to ensure the user can cancel a query that's spitting
out lots of notice/warning messages, even if they're coming from a loop
that doesn't otherwise contain a CHECK_FOR_INTERRUPTS. Per gripe from
Stephen Frost.
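A minimal sketch of the idea, assuming the check sits at the tail of message
emission (this is not the actual elog.c code; only NOTICE and
CHECK_FOR_INTERRUPTS are real PostgreSQL names):

    #include "postgres.h"      /* elevel constants such as NOTICE */
    #include "miscadmin.h"     /* CHECK_FOR_INTERRUPTS() */

    /* Hypothetical tail of message emission. */
    static void
    finish_message_sketch(int elevel)
    {
        /* ... the message has been formatted and sent at this point ... */

        /*
         * For NOTICE or higher severity, service any pending query cancel,
         * so a loop that emits many messages remains cancelable.
         */
        if (elevel >= NOTICE)
            CHECK_FOR_INTERRUPTS();
    }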
present; intervening positions are filled with nulls. (For example,
assigning to a[5] of a three-element array sets a[4] to null.) This behavior
is required by SQL99 but was not implementable before 8.2 due to lack
of support for nulls in arrays. I have only made it work for the
one-dimensional case, which is all that SQL99 requires. It seems quite
complex to get it right in higher dimensions, and since we never allowed
extension at all in higher dimensions, I think that must count as a
future feature addition, not a bug fix.
the SQL spec, viz., IS NULL is true if all the row's fields are null, and
IS NOT NULL is true if all the row's fields are not null. (So for a row with
some but not all fields null, such as ROW(1, NULL), both tests are false.)
The former coding got this right for a limited number of cases with IS NULL
(ie, those where it could disassemble a ROW constructor at parse time), but
was entirely wrong for IS NOT NULL. Per report from Teodor.
I desisted from changing the behavior for arrays, since on closer inspection
it's not clear that there's any support for that in the SQL spec. This
probably needs more consideration.
to performance. (A wholesale effort to get rid of strncpy should be
undertaken sometime, but not during beta.) This commit also fixes dynahash.c
to correctly truncate overlength string keys for hashtables, so that its
callers don't have to anymore.
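The usual replacement pattern is a strlcpy()-style copy, which truncates to
fit, always NUL-terminates, and reports the source length so callers can
detect truncation; a minimal sketch (my_strlcpy is an illustrative name, not
necessarily what the code above uses):

    #include <string.h>

    /* Unlike strncpy, this always NUL-terminates (when siz > 0) and never
     * zero-fills the remainder of the destination buffer. */
    static size_t
    my_strlcpy(char *dst, const char *src, size_t siz)
    {
        size_t srclen = strlen(src);

        if (siz > 0)
        {
            size_t copylen = (srclen >= siz) ? siz - 1 : srclen;

            memcpy(dst, src, copylen);
            dst[copylen] = '\0';
        }
        return srclen;          /* >= siz means the copy was truncated */
    }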
wrong answer, as has been seen to occur with a buggy Linux kernel. Not
really our bug, but it's a simple test in a seldom-used control path,
so might as well have a defense.
'postmaster', so as not to depend on the existence of the postmaster
symlink. Also, implement postmaster-still-alive and postmaster-kill
operations for Windows, per Magnus.
return true for exactly the characters treated as whitespace by their flex
scanners. Per report from Victor Snezhko and subsequent investigation.
Also fix a passel of unsafe usages of <ctype.h> functions, that is, ye olde
char-vs-unsigned-char issue. I won't miss <ctype.h> when we are finally
able to stop using it.
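For reference, the pattern in question: the <ctype.h> classifiers accept
only EOF or values representable as unsigned char, so passing a plain char
is undefined behavior when char is signed and the byte has its high bit set
(common in multibyte encodings). The safe usage:

    #include <ctype.h>

    /* isspace(c) on a plain char is undefined behavior for negative
     * values; cast through unsigned char first. */
    static int
    safe_isspace(char c)
    {
        return isspace((unsigned char) c);
    }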
(possibly (un)translated) letters that are actually expected as input.
Also reject invalid responses instead of silently taking them as "no".
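A hedged sketch of such a prompt loop (all names here are hypothetical
illustrations):

    #include <stdio.h>
    #include <string.h>

    /* Accept only the expected (possibly translated) answers; re-prompt
     * on anything else instead of silently treating it as "no". */
    static int
    prompt_yes_no(const char *question, const char *yes, const char *no)
    {
        char buf[32];

        for (;;)
        {
            printf("%s (%s/%s) ", question, yes, no);
            fflush(stdout);
            if (fgets(buf, sizeof(buf), stdin) == NULL)
                return 0;                   /* EOF: treat as "no" */
            buf[strcspn(buf, "\n")] = '\0';
            if (strcmp(buf, yes) == 0)
                return 1;
            if (strcmp(buf, no) == 0)
                return 0;
            /* invalid response: loop and ask again */
        }
    }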
with help from Bernd Helmle
even when a single relation requires more than max_fsm_pages pages. Also,
make VACUUM emit a warning in this case, since it likely means that VACUUM
FULL or other drastic corrective measure is needed. Per reports from Jeff
Frost and others of unexpected changes in the claimed max_fsm_pages need.
is a large enough histogram, it will use the number of matches in the
histogram to derive a selectivity estimate, rather than the admittedly
pretty bogus heuristics involving examining the pattern contents. I set
'large enough' at 100, but perhaps we should change that later. Also
apply the same technique in contrib/ltree's <@ and @> estimator. Per
discussion with Stefan Kaltenbrunner and Matteo Beccati.
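A minimal sketch of that estimate (the function and the match callback are
hypothetical; the real code lives in the backend's selectivity-estimation
routines):

    #include <stdbool.h>

    /* Fraction of histogram sample values matching the pattern, used as
     * the selectivity once the histogram is "large enough" (>= 100
     * entries, per the discussion above). */
    static double
    histogram_match_selectivity(char **hist_values, int nvalues,
                                bool (*matches) (const char *))
    {
        int nmatch = 0;

        for (int i = 0; i < nvalues; i++)
        {
            if (matches(hist_values[i]))
                nmatch++;
        }
        return (double) nmatch / (double) nvalues;
    }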
tables in the query compete for cache space, not just the one we are
currently costing an indexscan for. This seems more realistic, and it
definitely will help in examples recently exhibited by Stefan
Kaltenbrunner. To get the total size of all the tables involved, we must
tweak the handling of 'append relations' a bit --- formerly we looked up
information about the child tables on-the-fly during set_append_rel_pathlist,
but it needs to be done before we start doing any cost estimation, so
push it into the add_base_rels_to_query scan.
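A hedged sketch of the costing idea only (names are hypothetical, and the
real computation in costsize.c is more involved): each relation is credited
with a share of the cache proportional to its fraction of the total pages in
the query.

    /* Cache credited to one relation when all tables in the query
     * compete for effective_cache_size. */
    static double
    rel_cache_share(double rel_pages, double total_table_pages,
                    double effective_cache_size)
    {
        if (total_table_pages <= 0)
            return effective_cache_size;    /* degenerate case */
        return effective_cache_size * (rel_pages / total_table_pages);
    }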