and hash bucket-size estimation. The issue has been there for a while but is
more critical in 7.4 because it affects varchar columns. Per report from
Greg Stark.
proposal for eventually deprecating OIDs on user tables that I posted
earlier to pgsql-hackers. pg_dump now always specifies WITH OIDS or
WITHOUT OIDS when dumping a table. The documentation has been updated.
Neil Conway
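As a rough sketch of the dump output this produces (table definition
hypothetical), the clause is now always spelled out:

CREATE TABLE example (
    id integer
) WITHOUT OIDS;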
to note:
1) The argument type is numeric. I thought this was the best way of allowing
arbitrarily large factorials, even though factorial(2^63) is an enormous
number. Happy to change to integers if this is overkill.
2) Since we're accepting numeric arguments, the patch tests for floats:
if a numeric with a non-zero decimal portion is passed, an error is raised,
since (from memory) factorials of non-integers are undefined.
Gavin Sherry
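A minimal usage sketch, assuming the function is exposed to SQL as
factorial(numeric):

SELECT factorial(5);    -- 120
SELECT factorial(5.5);  -- error: factorial of a non-integer is undefined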
This first part of the background writer does no syncing at all.
Its only purpose is to keep the LRU heads clean so that regular
backends seldom, if ever, have to call write().
Jan
which had been unintentionally broken by recent changes to tighten up the
DateStyle rules for all-numeric date input. Add documentation and
regression tests for this, too.
pghackers proposal of 8-Nov. All the existing cross-type comparison
operators (int2/int4/int8 and float4/float8) have appropriate support.
The original proposal of storing the right-hand-side datatype as part of
the primary key for pg_amop and pg_amproc got modified a bit in the event;
it is easier to store zero as the 'default' case and only store a nonzero
when the operator is actually cross-type. Along the way, remove the
long-since-defunct bigbox_ops operator class.
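A hedged illustration of the kind of query this catalog support is aimed at
(hypothetical table; whether the planner actually uses the index depends on
the later planner work):

CREATE TABLE big (id int8 PRIMARY KEY);
EXPLAIN SELECT * FROM big WHERE id = 42::int4;  -- cross-type int8 = int4 comparison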
Remove the 'strategy map' code, which was a large amount of mechanism
that no longer had any use except reverse-mapping from procedure OID to
strategy number. Passing the strategy number to the index AM in the
first place is simpler and faster.
This is a preliminary step in planned support for cross-datatype index
operations. I'm committing it now since the ScanKeyEntryInitialize()
API change touches quite a lot of files, and I want to commit those
changes before the tree drifts under me.
rule split the query into one INSERT and one UPDATE, where the UPDATE
then hits the just-created row without modifying the key fields again.
In this special case, the new key slipped in totally unchecked.
Jan
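A hedged sketch of the scenario described above (all names hypothetical):
the rule turns one INSERT into an INSERT plus an UPDATE of the same new row,
and the UPDATE never touches the key column, so before the fix the bogus key
could slip past the foreign-key check.

CREATE TABLE pk (k int PRIMARY KEY);
CREATE TABLE fk (k int REFERENCES pk, note text);
CREATE VIEW fk_v AS SELECT * FROM fk;
CREATE RULE fk_v_ins AS ON INSERT TO fk_v DO INSTEAD (
    INSERT INTO fk (k) VALUES (new.k);
    UPDATE fk SET note = new.note WHERE k = new.k
);
INSERT INTO fk_v VALUES (42, 'slipped in');   -- 42 is not present in pk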
ACL array, and force languages to be treated as owned by the bootstrap
user ID. (pg_language should have a lanowner column, but until it does
this will have to do as a workaround.)
a single LEFT JOIN query instead of firing the check trigger for each
row individually. Stephan Szabo, with some kibitzing from Tom Lane and
Jan Wieck.
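Roughly the shape of the set-oriented check (table and column names
hypothetical); any row it returns violates the constraint being added:

SELECT fk.*
  FROM fk_table fk
  LEFT JOIN pk_table pk ON fk.refcol = pk.keycol
 WHERE pk.keycol IS NULL
   AND fk.refcol IS NOT NULL;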
with required outer parentheses. Breakage seems to be leftover from
domain-constraint patches. This could be smarter about suppressing
extra parens, but at this stage of the release cycle I want certainty,
not cuteness.
of function bodies is done at CREATE FUNCTION time. This is normally
true but can be set false to avoid problems with forward references,
wrong schema search path, etc. This is just the backend patch; pg_dump
still needs to be adjusted to make use of it.
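A minimal sketch, assuming the new setting is the check_function_bodies
GUC: with it turned off, a body that references a not-yet-created function
is accepted at CREATE FUNCTION time and only checked when it is run.

SET check_function_bodies = false;
CREATE FUNCTION caller() RETURNS int AS
    'SELECT helper()'       -- helper() need not exist yet
    LANGUAGE sql;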
in the schema search path. Otherwise pg_dump doesn't correctly dump
scenarios where a custom opclass is created in 'public' and then used
by indexes in other schemas.
discussion on pgsql-hackers: in READ COMMITTED mode we just have to force
a QuerySnapshot update in the trigger, but in SERIALIZABLE mode we have
to run the scan under a current snapshot and then complain if any rows
would be updated/deleted that are not visible in the transaction snapshot.
Before patch:
test=# select pg_get_constraintdef(oid) from pg_constraint;
pg_get_constraintdef
-------------------------------------------------------------------------------------------------
CHECK (VALUE >= 0)
CHECK ((((a)::text = 'asdf'::text) OR ((a)::text = 'fdsa'::text)) OR
((a)::text = 'dfd'::text))
PRIMARY KEY (b)
FOREIGN KEY (a) REFERENCES test2(b)
UNIQUE (b)
(5 rows)
test=# select pg_get_constraintdef(oid, true) from pg_constraint;
pg_get_constraintdef
-----------------------------------------------------------------------------------
CHECK VALUE >= 0
CHECK a::text = 'asdf'::text OR a::text = 'fdsa'::text OR a::text =
'dfd'::text
PRIMARY KEY (b)
FOREIGN KEY (a) REFERENCES test2(b)
UNIQUE (b)
(5 rows)
After patch:
test=# select pg_get_constraintdef(oid) from pg_constraint;
pg_get_constraintdef
-------------------------------------------------------------------------------------------------
CHECK (VALUE >= 0)
CHECK ((((a)::text = 'asdf'::text) OR ((a)::text = 'fdsa'::text)) OR
((a)::text = 'dfd'::text))
PRIMARY KEY (b)
FOREIGN KEY (a) REFERENCES test2(b)
UNIQUE (b)
(5 rows)
test=# select pg_get_constraintdef(oid, true) from pg_constraint;
pg_get_constraintdef
-----------------------------------------------------------------------------------
CHECK (VALUE >= 0)
 CHECK (a::text = 'asdf'::text OR a::text = 'fdsa'::text OR a::text =
'dfd'::text)
PRIMARY KEY (b)
FOREIGN KEY (a) REFERENCES test2(b)
UNIQUE (b)
(5 rows)
It's important that those parentheses are there so that (a) the output
matches all the other constraints and (b) people can just copy and paste
the definitions and they will work as SQL.
Christopher Kings-Lynne
in the RI triggers for ON DELETE/UPDATE SET DEFAULT. The code depended
way too much on knowledge of plan structure, and yet still would fail
if the generated query got rewritten by rules.
every string, especially if some of the output should be fixed-format
machine-readable. This needs to be more carefully sorted out. Also, make
the help message generated by --help-config -h more similar in style to
the others.
to allow es_snapshot to be set to SnapshotNow rather than a query snapshot.
This solves a bug reported by Wade Klaver, wherein triggers fired as a
result of RI cascade updates could misbehave.
now able to cope with assigning new relfilenode values to nailed-in-cache
indexes, so they can be reindexed using the fully crash-safe method. This
leaves only shared system indexes as special cases. Remove the 'index
deactivation' code, since it provides no useful protection in the shared-
index case. Require reindexing of shared indexes to be done in standalone
mode, but remove other restrictions on REINDEX. -P (IgnoreSystemIndexes)
now prevents using indexes for lookups, but does not disable index updates.
It is therefore safe to allow it from PGOPTIONS. Upshot: reindexing system catalogs
can be done without a standalone backend for all cases except
shared catalogs.
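A hedged illustration of that upshot:

REINDEX TABLE pg_class;      -- non-shared system catalog: can now be reindexed from a regular backend
REINDEX TABLE pg_database;   -- shared catalog: still requires a standalone backend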
difference between INSERT_IN_PROGRESS and DELETE_IN_PROGRESS for
tuples inserted and then deleted by a concurrent transaction.
Example of bug:
regression=# create table foo (f1 int);
CREATE TABLE
regression=# begin;
BEGIN
regression=# insert into foo values(1);
INSERT 195531 1
regression=# delete from foo;
DELETE 1
regression=# insert into foo values(1);
INSERT 195532 1
regression=# create unique index fooi on foo(f1);
ERROR: could not create unique index
DETAIL: Table contains duplicated values.
recent gripe, I discovered not one but two undocumented, undesirable
behaviors of glibc's mktime. So, stop using it entirely, and always
rely on inversion of localtime() to determine the local time zone.
It's not even very much slower, as it turns out that mktime (at least
in the glibc implementation) also does repeated reverse-conversions.
sequence every time it's called is bogus --- it interferes with user
control over the seed, and actually decreases randomness overall
(because a seed based on time(NULL) is pretty predictable). If you really
want a reproducible result from geqo, do 'set seed = 0' before planning
a query.
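For example (geqo_threshold lowered here only so that GEQO kicks in on a
small, hypothetical join):

SET seed = 0;              -- pin the planner's random seed
SET geqo_threshold = 2;
EXPLAIN SELECT * FROM a, b, c WHERE a.id = b.id AND b.id = c.id;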