Attached is a patch which patches cleanly against the Sunday afternoon
snapshot. It modifies pg_dump to dump COMMENT ON statements for
user-definable descriptions. It also modifies comment.c so that the
operator behavior is as Peter E. would like: a comment on an operator is
applied to the underlying function.
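For illustration, a minimal sketch of the new behavior (operator and types
chosen arbitrarily):

    -- The description is stored on the operator's underlying function
    -- (int4eq in this case), not on the operator itself:
    COMMENT ON OPERATOR = (int4, int4) IS 'integer equality';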
Thanks,
Mike Mascari
The string buffer used to hold each attribute value read in by COPY is now
reused for successive attributes, instead of being deleted and recreated
from scratch for each value read in. This reduces palloc/pfree
overhead a lot. COPY IN still seems to be noticeably slower than it was
in 6.5 --- we need to figure out why. This change takes care of the only
major performance loss I can see in copy.c itself, so the performance
problem is at a lower level somewhere.
Some code paths were calling datatype input and conversion functions with
incorrect parameters, which would lead to trouble with datatypes that pay
attention to the typelem or typmod parameters to these functions. In
particular, incorrect code in pg_aggregate.c explains the platform-specific
failures that have been reported in NUMERIC avg().
* Let unprivileged users change their own passwords.
* The password is now an Sconst in the parser, which better reflects its
  text datatype and forces users to quote it (see the example after this
  list).
* If your password is NULL, you are not written to the password file,
  meaning you cannot connect until a password has been set up (if you use
  password authentication).
* When you drop a user that owns a database, you get an error and the
  database is not dropped.
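A quick sketch of the new quoting rule (user name invented):

    CREATE USER fred;
    ALTER USER fred WITH PASSWORD 'secret';  -- string constant: quotes required
    ALTER USER fred WITH PASSWORD secret;    -- now a parse error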
VACUUM normally compacts the table back-to-front, and stops as soon as it
reaches a page onto which it has moved some tuples. (This logic doesn't
make for a complete packing of the table, but it should be pretty close.)
But the way it checked whether it had reached a page with moved-in tuples
was to test whether the current page was the same as the last page in the
list of pages that have enough free space to be move-in targets. And there
was other code that would remove pages from that list once they got full.
There was a kluge that prevented the last list entry from being
removed, but it didn't get the job done. Fixed by keeping a separate
variable that contains the largest block number into which a tuple
has been moved. There's no longer any need to protect the last element
of the fraged_pages list.
Also, fix NOTICE messages to describe elapsed user/system CPU time
correctly.
relcache entry no longer leaks a small amount of memory. index_endscan
now releases all the memory acquired by index_beginscan, so callers of it
should NOT pfree the scan descriptor anymore.
I didn't have time for documentation yet, but I'll write some. There are
still some things to work out about what happens when you alter or drop
users, but the group support in and of itself is done.
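A minimal sketch of the group support in action (all names invented):

    CREATE GROUP staff;
    ALTER GROUP staff ADD USER fred, barney;
    GRANT SELECT ON mytable TO GROUP staff;
    ALTER GROUP staff DROP USER barney;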
--
Peter Eisentraut <peter_e@gmx.net>
> triggered function now returns the right datatype.
Oops, I got crossed up with Jan's improvements. Ignore this.
--
Peter Eisentraut <peter_e@gmx.net>
This takes care of anywhere from zero to two TODO items.
* Allow flag to control COPY input/output of NULLs
I got this:
COPY table .... [ WITH NULL AS 'string' ]
which does what you'd expect. The default is \N; otherwise you can use
empty strings, etc. On COPY IN this acts like a filter: every data item
that looks like 'string' becomes a NULL. Pretty straightforward.
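For example (table name invented):

    COPY mytable TO stdout;                    -- NULLs come out as \N by default
    COPY mytable FROM stdin WITH NULL AS '';   -- empty items become NULL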
This also seems to be related to
* Make postgres user have a password by default
If I recall this discussion correctly, the problem was actually that the
default password for the postgres (or any) user was in fact "\N", because
of the way COPY is used. With this change, the file pg_pwd is copied out
with nulls as empty strings, so if someone doesn't have a password, the
password is just '', which one would expect from a new account. I don't
think anyone really wants a hard-coded default password.
--
Peter Eisentraut <peter_e@gmx.net>
* Document/trigger/rule so changes to pg_shadow recreate pg_pwd
I did it with a trigger and it seems to work like a charm. The function
that already updates the file for create and alter user has been made a
built-in "SQL" function and a trigger is created at initdb time.
Comments around the pg_pwd updating function seem to be worried about the
routine being called concurrently, but I really don't see a reason to
worry about this; verify for yourself. I guess we never had a system
trigger before, so treat this with care, and feel free to adjust the
nomenclature as well.
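Roughly, the setup looks like this (a sketch; the trigger and function
names are my assumption, so check the patch for the real ones):

    CREATE TRIGGER pg_sync_pg_pwd
        AFTER INSERT OR UPDATE OR DELETE ON pg_shadow
        FOR EACH ROW EXECUTE PROCEDURE update_pg_pwd();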
--
Peter Eisentraut <peter_e@gmx.net>
Database names containing single quotes don't work at all, and because of
shell quoting rules this can't be fixed, so I put in error messages to
that end.
Also, calling create or drop database in a transaction block is not so
good either, because the file system mysteriously refuses to roll back rm
calls on transaction aborts. :) So I put in checks to see if a transaction
is in progress and signal an error.
Also I put the whole call in a transaction of its own to be able to roll
back changes to pg_database in case the file system operations fail.
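For example, something along these lines now fails up front instead of
leaving the file system and pg_database out of sync:

    BEGIN;
    DROP DATABASE somedb;  -- now raises an error about being called
                           -- inside a transaction block
    ABORT;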
The alternative-location issues I posted about recently are untouched,
awaiting the outcome of that discussion. Other than that, this should be
much more fool-proof now. I cleaned up the docs as well.
--
Peter Eisentraut <peter_e@gmx.net>
This one should work much better than the one I sent in previously. The
functionality is the same, but the previous patch was missing one file,
which made the compilation fail. The docs also received a minor fix.
--
Peter Eisentraut <peter_e@gmx.net>
Require the user to be the table owner in order to vacuum a table. This is
mainly to prevent denial-of-service attacks via repeated vacuums. Allow
VACUUM to gather
statistics about system relations, except for pg_statistic itself ---
not clear that it's worth the trouble to make that case work cleanly.
Cope with possible tuple size overflow in pg_statistic tuples; I'm
surprised we never realized that could happen. Hold a couple of locks
a little longer to try to prevent deadlocks between concurrent VACUUMs.
There still seem to be some problems in that last area though :-(
Allow multiple VACUUMs to run in parallel --- and, not incidentally, remove
a common reason for needing manual cleanup by the DB admin after a crash.
Remove the initial global delete of pg_statistic rows in VACUUM ANALYZE;
this was not only bad for performance of other backends that had to run
without stats for a while, but it was fundamentally broken because it was
done outside any transaction. It's surprising we didn't see more
consequences of that.
Detect attempt to run VACUUM inside a transaction block. Check for
query cancel request before starting vacuum of each table. Clean up
vacuum's private portal storage if vacuum is aborted.
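To illustrate the new checks (table name invented):

    VACUUM ANALYZE mytable;  -- must now be issued by the table's owner
    BEGIN;
    VACUUM;                  -- now detected and rejected with an error
    ABORT;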
Make all system indexes unique.
Make all cache loads use system indexes.
Rename *rel to *relid in inheritance tables.
Rename cache names to be clearer.
Tighten default file permissions (whoever thought world-writable files
were a good default????). Modify
the pg_pwd code so that pg_pwd is created with 600 permissions. Modify
initdb so that permissions on a pre-existing PGDATA directory are not
blindly accepted: if the dir is already there, initdb does chmod go-rwx to
make sure the permissions are OK and that the dir actually is owned by
postgres.