datatype by array_eq and array_cmp; use this to solve problems with memory
leaks in array indexing support. The parser's equality_oper and ordering_oper
routines also use the cache. Change the operator search algorithms to look
for appropriate btree or hash index opclasses, instead of assuming operators
named '<' or '=' have the right semantics. (ORDER BY ASC/DESC now also look
at opclasses, instead of assuming '<' and '>' are the right things.) Add
several more index opclasses so that there is no regression in functionality
for base datatypes. initdb forced due to catalog additions.
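As a rough illustration (not taken from the commit itself), an array comparison
like the one below now resolves the element type's comparison semantics through
its default btree/hash opclass rather than by hunting for an operator named '=':

    -- element comparisons for int4[] go through int4's default opclass support
    SELECT '{1,2,3}'::int4[] = '{1,2,3}'::int4[];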
them as arrays of the internal datatype. This requires treating the
stavalues columns as 'anyarray' rather than 'text[]', which is not 100%
kosher but seems to work fine for the purposes we need for pg_statistic.
Perhaps in the future 'anyarray' will be allowed more generally.
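For illustration only (the table name is hypothetical), the collected values can
be inspected directly, and the stavalues columns now come back as anyarray:

    -- stavalues1 is declared anyarray rather than text[]
    SELECT staattnum, stakind1, stavalues1
    FROM pg_statistic
    WHERE starelid = 'mytable'::regclass;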
operations: make sure we use operators that are compatible, as determined
by a mergejoin link in pg_operator. Also, add code to planner to ensure
we don't try to use hashed grouping when the grouping operators aren't
marked hashable.
that ANALYZE would not gather any stats for a CHAR(255) column. I still
think a width threshold is appropriate for the reasons mentioned in the
code, but we can loosen it at least.
array header, and to compute sizing and alignment of array elements
the same way normal tuple access operations do --- viz, using the
tupmacs.h macros att_addlength and att_align. This makes the world
safe for arrays of cstrings or intervals, and should make it much
easier to write array-type-polymorphic functions; as examples see
the cleanups of array_out and contrib/array_iterator. By Joe Conway
and Tom Lane.
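A quick way to see the effect (a sketch, not part of the commit): interval
elements have size and alignment requirements that the old array code did not
handle, so a literal like this depends on the new att_addlength/att_align-based
element access:

    -- arrays of interval are now handled with proper element alignment
    SELECT '{"1 day","2 hours 30 minutes"}'::interval[];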
value '-2' is used to indicate a variable-width type whose width is
computed as strlen(datum)+1. Everything that looks at typlen is updated
except for array support, which Joe Conway is working on; at the moment
it wouldn't work to try to create an array of cstring.
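As a quick illustration (not part of the change itself), the convention is
visible in pg_type; at this point cstring should be the only type carrying the
new marker:

    -- typlen conventions: >0 = fixed width, -1 = varlena, -2 = strlen(datum)+1
    SELECT typname, typlen FROM pg_type WHERE typlen = -2;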
it takes could be held for quite a while after the analyze step completes.
Rethink locking of pg_statistic in light of this fact. The original
scheme took an exclusive lock on pg_statistic, which was okay when the
lock could be expected to be released shortly, but that doesn't hold
anymore. Back off to a normal writer's lock (RowExclusiveLock). This
allows concurrent ANALYZE of nonoverlapping sets of tables, at the price
that concurrent ANALYZEs of the same table may fail with 'tuple
concurrently updated'.
hardwired lists of index names for each catalog, use the relcache's
mechanism for caching lists of OIDs of indexes of any table. This
reduces the common case of updating system catalog indexes to a single
line, makes it much easier to add a new system index (in fact, you
can now do so on-the-fly if you want to), and as a nice side benefit
improves performance a little. Per recent pghackers discussion.
code review by Tom Lane. Remaining issues: functions that take or
return tuple types are likely to break if one drops (or adds!)
a column in the table defining the type. Need to think about what
to do here.
Along the way: some code review for recent COPY changes; mark system
columns attnotnull = true where appropriate, per discussion a month ago.
attstattarget to indicate 'use the default'. The default is now a GUC
variable default_statistics_target, and so may be changed on the fly. Along
the way we gain the ability to have pg_dump dump the per-column statistics
target when it's not the default. Patch by Neil Conway, with some kibitzing
from Tom Lane.
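A minimal sketch of the user-visible knobs (table and column names are
hypothetical):

    -- change the session default on the fly
    SET default_statistics_target = 50;
    -- an explicit per-column target overrides the default and is what pg_dump
    -- now preserves when it differs from the default
    ALTER TABLE foo ALTER COLUMN bar SET STATISTICS 200;
    ANALYZE foo;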
transaction, so as to avoid returning them out of the index AM. Saves
repeated heap_fetch operations on frequently-updated rows. Also detect
queries on unique keys (equality to all columns of a unique index), and
don't bother continuing the scan once we have found the first match.
Killing is implemented in the btree and hash AMs, but not yet in rtree
or gist, because there isn't an equally convenient place to do it in
those AMs (the outer amgetnext routine can't do it without re-pinning
the index page).
Did some small cleanup on APIs of HeapTupleSatisfies, heap_fetch, and
index_insert to make this a little easier.
in snapshots, per my proposal of a few days ago. Also, tweak heapam.c
routines (heap_insert, heap_update, heap_delete, heap_mark4update) to
be passed the command ID to use, instead of doing GetCurrentCommandID.
For catalog updates they'll still get passed current command ID, but
for updates generated from the main executor they'll get passed the
command ID saved in the snapshot the query is using. This should fix
some corner cases associated with functions and triggers that advance
current command ID while an outer query is still in progress.
yesterday's proposal to pghackers. Also remove unnecessary parameters
to heap_beginscan, heap_rescan. I modified pg_proc.h to reflect the
new numbers of parameters for the AM interface routines, but did not
force an initdb because nothing actually looks at those fields.
qualified operator names directly, for example CREATE OPERATOR myschema.+
( ... ). To qualify an operator name in an expression you need to write
OPERATOR(myschema.+) (thanks to Peter for suggesting an escape hatch).
I also took advantage of having to reformat pg_operator to fix something
that'd been bugging me for a while: mergejoinable operators should have
explicit links to the associated cross-data-type comparison operators,
rather than hardwiring an assumption that they are named < and >.
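For example (the schema and operator definition are hypothetical), the two forms
look like this:

    CREATE SCHEMA myschema;
    -- a qualified name is accepted directly in CREATE OPERATOR
    CREATE OPERATOR myschema.+ (leftarg = int4, rightarg = int4, procedure = int4pl);
    -- in an expression, a qualified operator needs the OPERATOR() escape hatch
    SELECT 2 OPERATOR(myschema.+) 3;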
objects to be privilege-checked. Some change in their APIs would be
necessary in the schema environment no matter what, and simply getting
rid of the name-based interface entirely seems like the best way.
o Change all NOTICE messages in current CVS to WARNING. We were going
to do this just before 7.3 beta but it has to be done now, as you will
see below.
o Change current INFO messages that should be controlled by
client_min_messages to NOTICE.
o Force remaining INFO messages, like from EXPLAIN, VACUUM VERBOSE, etc.
to always go to the client.
o Remove INFO from the client_min_messages options and add NOTICE.
Seems we do need three non-ERROR elog levels to handle the various
behaviors we need for these messages.
Regression passed.
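In practice (a sketch, not from the commit), the split means NOTICE-level chatter
can be filtered per session while INFO output from EXPLAIN, VACUUM VERBOSE, etc.
always reaches the client:

    -- suppress NOTICE messages for this session; INFO output is unaffected
    SET client_min_messages = warning;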
now just below FATAL in server_min_messages. Added more text to
highlight the ordering difference between it and client_min_messages.
---------------------------------------------------------------------------
REALLYFATAL => PANIC
STOP => PANIC
New INFO level that prints to the client by default
New LOG level that prints to the server log by default
Cause VACUUM information to print only to the client
NOTICE => INFO where purely informational messages are sent
DEBUG => LOG for purely server status messages
DEBUG removed, kept only for backward compatibility
DEBUG5, DEBUG4, DEBUG3, DEBUG2, DEBUG1 added
DebugLvl removed in favor of new DEBUG[1-5] symbols
New server_min_messages GUC parameter with values:
DEBUG[5-1], INFO, NOTICE, ERROR, LOG, FATAL, PANIC
New client_min_messages GUC parameter with values:
DEBUG[5-1], LOG, INFO, NOTICE, ERROR, FATAL, PANIC
Server startup now logged with LOG instead of DEBUG
Remove debug_level GUC parameter
elog() numbers now start at 10
Add test to print error message if older elog() values are passed to elog()
Bootstrap mode now has a -d that requires an argument, like postmaster
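A sketch of how the two new GUCs are meant to be used (the values shown are just
examples):

    -- filter what reaches the server log and the client independently
    SET server_min_messages = debug2;   -- server log gets DEBUG2 and above
    SET client_min_messages = notice;   -- client gets NOTICE and above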
database, including system catalogs (but not the shared catalogs,
since they don't really belong to his database). This is per recent
mailing list discussion. Clean up some other code that also checks
for database ownership by introducing a test function is_dbadmin().
(vs. at the end of a normal sort). This ensures that explicit sorts
yield the same ordering as a btree index scan. To be really sure that
that equivalence holds, we use the btree entries in pg_amop to decide
whether we are looking at a '<' or '>' operator. For a sort operator
that has no btree association, we put the nulls at the front if the
operator is named '>' ... pretty grotty, but it does the right thing in
simple ASC and DESC cases, and at least there's no possibility of getting
a different answer depending on the plan type chosen.
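A small illustration (table and data hypothetical) of the resulting ordering:

    -- explicit sorts now agree with a btree index scan on x
    SELECT x FROM t ORDER BY x;          -- NULLs come out after the non-null values
    SELECT x FROM t ORDER BY x DESC;     -- NULLs come out first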
a separate statement (though it can still be invoked as part of VACUUM, too).
pg_statistic redesigned to be more flexible about what statistics are
stored. ANALYZE now collects a list of several of the most common values,
not just one, plus a histogram (not just the min and max values). Random
sampling is used to make the process reasonably fast even on very large
tables. The number of values and histogram bins collected is now
user-settable via an ALTER TABLE command.
There is more still to do; the new stats are not being used everywhere
they could be in the planner. But the remaining changes for this project
should be localized, and the behavior is already better than before.
A not-very-related change is that sorting now makes use of a btree comparison
routine if it can find one, rather than invoking '<' twice.
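In outline (object names hypothetical; the ALTER TABLE form shown is the
per-column SET STATISTICS option), the new usage looks like:

    ANALYZE mytable;                      -- now a statement of its own
    VACUUM ANALYZE mytable;               -- still available
    -- control how many common values and histogram bins ANALYZE keeps
    ALTER TABLE mytable ALTER COLUMN acol SET STATISTICS 100;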