on UPDATE.
This gets rid of the long-standing annoyance that updating a row
that has foreign keys locks all the referenced rows even if the
foreign key values do not change.
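For illustration, the behavior change with a pair of hypothetical tables:
  CREATE TABLE pk (id int PRIMARY KEY);
  CREATE TABLE fk (id int REFERENCES pk, note text);
  UPDATE fk SET note = 'touched';  -- key column unchanged: pk rows no longer locked
  UPDATE fk SET id = 2;            -- key changed: referenced row checked as before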
The trick is to do a check identical to NO ACTION after any UPDATE
actually performed in the SET DEFAULT case. Since a SET DEFAULT
operation should have moved referencing rows to a new "home", a following
NO ACTION check can only fail if the column defaults of the referencing
table resulted in the key we actually deleted. Thanks to Stephan.
Jan
(materialization into a tuple store) discussed on pgsql-hackers earlier.
I've updated the documentation and the regression tests.
Notes on the implementation:
- I needed to change the tuple store API slightly -- it assumes that it
won't be used to hold data across transaction boundaries, so the temp
files that it uses for on-disk storage are automatically reclaimed at
end-of-transaction. I added a flag to tuplestore_begin_heap() to control
this behavior. Is changing the tuple store API in this fashion OK?
- in order to store executor results in a tuple store, I added a new
CommandDest. This works well for the most part, with one exception: the
current DestFunction API doesn't provide enough information to allow the
Executor to store results into an arbitrary tuple store (where the
particular tuple store to use is chosen by the call site of
ExecutorRun). To work around this, I've temporarily hacked up a solution
that works, but is not ideal: since the receiveTuple DestFunction is
passed the portal name, we can use that to lookup the Portal data
structure for the cursor and then use that to get at the tuple store the
Portal is using. This unnecessarily ties the Portal code to the
tupleReceiver code, but it works...
The proper fix for this is probably to change the DestFunction API --
Tom suggested passing the full QueryDesc to the receiveTuple function.
In that case, callers of ExecutorRun could "subclass" QueryDesc to add
any additional fields that their particular CommandDest needed to get
access to. This approach would work, but I'd like to think about it for
a little bit longer before deciding which route to go. In the meantime,
the code works fine, so I don't think a fix is urgent.
- (semi-related) I added a NO SCROLL keyword to DECLARE CURSOR, and
adjusted the behavior of SCROLL in accordance with the discussion on
-hackers; a usage sketch follows these notes.
- (unrelated) Cleaned up some SGML markup in sql.sgml, copy.sgml
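A usage sketch for the NO SCROLL addition (hypothetical cursor and table
names):
  BEGIN;
  DECLARE c NO SCROLL CURSOR FOR SELECT * FROM t;
  FETCH FORWARD 10 FROM c;  -- forward fetches work as usual
  -- FETCH BACKWARD FROM c; -- now rejected for a NO SCROLL cursor
  COMMIT;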
Neil Conway
them as arrays of the internal datatype. This requires treating the
stavalues columns as 'anyarray' rather than 'text[]', which is not 100%
kosher but seems to work fine for the purposes we need for pg_statistic.
Perhaps in the future 'anyarray' will be allowed more generally.
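One visible consequence (illustrative query; actual output depends on what
has been analyzed):
  SELECT starelid, staattnum, stavalues1 FROM pg_statistic LIMIT 1;
  -- stavalues1 is now an 'anyarray' holding values of the analyzed
  -- column's own type, rather than a text[] rendering of them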
some of the algorithms for higher functions. I see about a factor of ten
speedup on the 'numeric' regression test, but it's unlikely that that test
is representative of real-world applications.
initdb forced due to change of on-disk representation for NUMERIC.
Add ALTER SEQUENCE to modify min/max/increment/cache/cycle values
Also updated the CREATE SEQUENCE docs to mention NO MINVALUE and NO MAXVALUE.
New Files:
doc/src/sgml/ref/alter_sequence.sgml
src/test/regress/expected/sequence.out
src/test/regress/sql/sequence.sql
ALTER SEQUENCE is NOT transactional. It behaves similarly to setval().
It matches the proposed SQL200N spec, as well as Oracle in most ways --
Oracle lacks RESTART WITH for some strange reason.
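For example (illustrative names and values):
  CREATE SEQUENCE serial_seq;
  ALTER SEQUENCE serial_seq
    INCREMENT BY 10 MINVALUE 0 MAXVALUE 10000 RESTART WITH 100 CACHE 5 CYCLE;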
--
Rod Taylor <rbt@rbt.ca>
what is possible using integer-datetime timestamps. It does attempt
to exercise the maximum allowable timestamp range.
Also included is a small error check, applied when converting a timestamp
from external to internal format, that prevents out-of-range timestamps
from being entered.
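For example, something like this should now draw an out-of-range error
rather than being accepted (illustrative value beyond the supported range):
  SELECT timestamp without time zone 'Jan 1, 300000';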
Files patched:
Index: src/backend/utils/adt/timestamp.c
Added a range check to prevent out-of-range timestamps
from being used.
Index: src/test/regress/sql/horology.sql
Index: src/test/regress/expected/horology-no-DST-before-1970.out
Index: src/test/regress/expected/horology-solaris-1947.out
Limited the range of timestamps being checked to
Jan 1, 4713 BC through Dec 31, 294276
In creating this patch, I have seen some definite problems with integer
timestamps and how they react when used near their limits. For example,
the following statement gives the correct result:
SELECT timestamp without time zone 'Jan 1, 4713 BC'
+ interval '109203489 days' AS "Dec 31, 294276";
However, this statement, which is the logical inverse of the above,
gives an incorrect result:
SELECT timestamp without time zone '12/31/294276'
- timestamp without time zone 'Jan 1, 4713 BC' AS "109203489 Days";
John Cochran
division and modulo functions, to avoid problems on OS X (which fails to
trap 0 divide at all) and Windows (which traps it in some bizarre
nonstandard fashion). Standardize on 'division by zero' as the one true
spelling of this error message. Add regression tests as suggested by
Neil Conway.
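Illustrative cases, all of which should now produce the same message:
  SELECT 1 / 0;      -- integer division
  SELECT 1 % 0;      -- integer modulo
  SELECT 1.0 / 0.0;  -- numeric division
  -- each fails with: division by zero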
RelOid_pg_class, and transaction locks XactLockTableId. RelId is renamed
to objId.
- LockObject() and UnlockObject() functions created, and their use
sprinkled throughout the code to do descent locking for domains and
types. They accept lock modes AccessShare and AccessExclusive, as we
only really need 'read' and 'write' locks at the moment. In most cases
the locks are held until the end of the transaction.
This fixes the cases Tom mentioned earlier in regard to locking with
Domains. If the patch is good, I'll work on cleaning up issues with
other database objects that have this problem (most of them).
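Roughly the kind of race this closes (illustrative two-session sketch,
hypothetical domain name):
  -- session 1
  BEGIN;
  CREATE TABLE t (v some_domain);
  -- session 2, concurrently: previously this could drop the domain out
  -- from under session 1; it now blocks until session 1 commits
  DROP DOMAIN some_domain;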
Rod Taylor
functions which limited the maximum date for a timestamp to AD 1465001.
The new limit is AD 5874897.
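For example (illustrative; assumes floating-point datetimes, given the
reduced range for integer timestamps noted below):
  SELECT timestamp '5874897-01-01 00:00:00';
  -- accepted under the new limit; any date past AD 1465001 failed before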
The files affected are:
doc/src/sgml/datatype.sgml:
Documentation change due to patch. Included is a notice about
the reduced range when using an eight-byte integer for timestamps.
src/backend/utils/adt/datetime.c:
Replacement functions for j2date() and date2j() functions.
src/include/utils/datetime.h:
Corrected a bug in the limit on the earliest possible date:
Nov 23, -4713 has a Julian day count of -1. The earliest possible
date should be Nov 24, -4713, with a day count of 0.
src/test/regress/expected/horology-no-DST-before-1970.out:
src/test/regress/expected/horology-solaris-1947.out:
src/test/regress/expected/horology.out:
Copies of expected output for regression testing.
Note: Only horology.out has been physically tested. I do not have access
to a Solaris box and I don't know how to provoke the "pre-1970" test.
src/test/regress/sql/horology.sql:
Added some test cases to check extended range.
John Cochran
of the containing query (which really can only happen in a rule context).
Per example from Brandon Craig Rhodes. Also, make the error message
more specific for the similar case with sub-select in FROM. The revised
coding should be easier to adapt to SQL99's LATERAL(), when we get around
to supporting that.
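The FROM case looks like this (illustrative):
  SELECT * FROM foo, (SELECT * FROM bar WHERE bar.x = foo.x) AS sub;
  -- the sub-select may not refer to foo; the error now says so more clearly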
takes two parameters, an OID x and an integer y, and returns "true" with
probability 1/y (the OID argument is ignored). This can be useful -- for
example, it can be used to select a random sampling of the rows in a
table (which is what the "random" regression test uses it for).
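That sampling effect is easy to get without it; an illustrative equivalent
of "oidrand(oid, 10)":
  SELECT * FROM onek WHERE random() < 0.1;  -- roughly a 10% sample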
This patch removes that function, because it was old and messy. The old
function had the following problems:
- it was undocumented
- it was poorly named
- it was designed to workaround an optimizer bug that no longer exists
(the OID argument is to ensure that the optimizer won't optimize away
calls to the function; AFAIK marking the function as 'volatile' suffices
nowadays)
- it used a different random-number generation technique than the other
PRNG-related functions in the backend do (it called random() like they
do, but it had its own logic for setting a seed and deciding when to
reseed the RNG).
Ok, this patch removes oidrand(), oidsrand(), and userfntest(), and
improves the SGML docs a little bit (un-commenting the setseed()
documentation).
Neil Conway
On Wed, 2003-01-08 at 21:59, Christopher Kings-Lynne wrote:
> I agree. I want to remove OIDs from heaps of our tables when we go to 7.3.
> I'd rather not have to do it in the dump due to down time.
Rod Taylor <rbt@rbt.ca>
constraints appearing in outer-join qualification clauses are restricted
as to when and where they can be pushed down. Add regression test
to catch future errors in this area.
startup, not in the parser; this allows ALTER DOMAIN to work correctly
with domain constraint operations stored in rules. Rod Taylor;
code review by Tom Lane.
for type 'time without time zone', as we already did for type
'timestamp without time zone'. This patch was proposed by Tom Lockhart
on 7-Nov-02, but he never got around to applying it. Adjust regression
tests and documentation to match.
passed to join selectivity estimators. Make use of this in eqjoinsel
to derive non-bogus selectivity for IN clauses. Further tweaking of
cost estimation for IN.
initdb forced because of pg_proc.h changes.
Try to model the effect of rescanning input tuples in mergejoins;
account for JOIN_IN short-circuiting where appropriate. Also, recognize
that mergejoin and hashjoin clauses may now be more than single operator
calls, so we have to charge appropriate execution costs.
necessarily following the JOIN syntax to develop the query plan. The old
behavior is still available by setting GUC variable JOIN_COLLAPSE_LIMIT
to 1. Also create a GUC variable FROM_COLLAPSE_LIMIT to control the
similar decision about when to collapse sub-SELECT lists into their parent
lists. (This behavior existed already, but the limit was always
GEQO_THRESHOLD/2; now it's separately adjustable.)
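For example (illustrative settings):
  SET join_collapse_limit = 1;  -- always follow explicit JOIN order, the old behavior
  SET from_collapse_limit = 8;  -- collapse sub-SELECT lists only up to this size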
There are two implementation techniques: the executor understands a new
JOIN_IN jointype, which emits at most one matching row per left-hand row,
or the result of the IN's sub-select can be fed through a DISTINCT filter
and then joined as an ordinary relation.
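An illustrative query that can now be planned either way:
  EXPLAIN SELECT * FROM a WHERE a.x IN (SELECT y FROM b);
  -- either the new JOIN_IN jointype, or b's output made DISTINCT and
  -- joined as an ordinary relation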
Along the way, some minor code cleanup in the optimizer; notably, break
out most of the jointree-rearrangement preprocessing in planner.c and
put it in a new file prep/prepjointree.c.
datetime token tables. Even more embarrassing, the regression tests
revealed some of the problems --- but evidently the bogus output wasn't
questioned. Add code to postmaster startup to directly check the tables
for correct ordering, in hopes of not being embarrassed like this again.
containing a volatile function), rather than only on 'Var = Var' clauses
as before. This makes it practical to do flatten_join_alias_vars at the
start of planning, which in turn eliminates a bunch of klugery inside the
planner to deal with alias vars. As a free side effect, we now detect
implied equality of non-Var expressions; for example in
SELECT ... WHERE a.x = b.y and b.y = 42
we will deduce a.x = 42 and use that as a restriction qual on a. Also,
we can remove the restriction introduced 12/5/02 to prevent pullup of
subqueries whose targetlists contain sublinks.
Still TODO: make statistical estimation routines in selfuncs.c and costsize.c
smarter about expressions that are more complex than plain Vars. The need
for this is considerably greater now that we have to be able to estimate
the suitability of merge and hash join techniques on such expressions.
match parent table. This used to work, but was broken in 7.3 by
rearrangement of code that handles targetlist sorting. Add a regression
test to catch future breakage.
patches of 9-Dec (permissions fix) and 13-Dec (performance) as well as
a partial fix for locking issues: concurrent DROP COLUMN should not
create trouble anymore. But concurrent DROP TABLE is still a risk, and
there is no protection at all against creating a column of a domain while
we are altering the domain.
columns in DefineIndex. So, ALTER TABLE ... PRIMARY KEY will now
automatically add the NOT NULL constraint. The alter_table regression
test appears to have wanted this all along: after the change, its results
better match the inline 'fails'/'succeeds' comments.
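For example (illustrative):
  CREATE TABLE t (id int);
  ALTER TABLE t ADD PRIMARY KEY (id);  -- id is now marked NOT NULL automatically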
Rod Taylor
produce which output in the geometry test, even with the problem narrowed
down to only whether they print minus zero or not. Instead, use
pg_regress' locale-variant mechanism to automatically consider the test
to pass if it matches either supplied comparison file. geometry_1.out
replaces the former geometry-positive-zeros.out.
make VALUE a non-reserved word again, use less invasive method of passing
ConstraintTestValue into transformExpr, fix problems with nested constraint
testing, do correct thing with NULL result from a constraint expression,
remove memory leak. Domain checks still need much more work if we are going
to allow ALTER DOMAIN, however.
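For reference, the construct in question (illustrative):
  CREATE DOMAIN posint AS int CHECK (VALUE > 0);
  SELECT 1 AS value;  -- VALUE works as an ordinary identifier again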
documentation and regression test mods. It seemed small and unobtrusive enough
to not require a specific proposal on the hackers list -- but if not, let me
know and I'll make a pitch. Otherwise, if there are no objections, please apply.
Joe Conway