Kerberos Parameter Examples
@@ -543,3 +554,20 @@ PSTYLE= /home/tgl/SGML/db118.d/docbook/print
+
+
diff --git a/doc/src/sgml/cvs.sgml b/doc/src/sgml/cvs.sgml
index a158a5c2c05..50f9a0ccbc3 100644
--- a/doc/src/sgml/cvs.sgml
+++ b/doc/src/sgml/cvs.sgml
@@ -1,5 +1,5 @@
@@ -88,7 +88,7 @@ $ cvs checkout -r REL6_4 tc
1.6
- then the tag TAG will reference
+ then the tag "TAG" will reference
file1-1.2, file2-1.3, etc.
@@ -606,7 +606,7 @@ $ which cvsup
who are actively maintaining the code base originally developed by
the DEC Systems Research Center.
- The PM3RPM distribution is roughly
+ The PM3RPM distribution is roughly
30MB compressed. At the time of writing, the 1.1.10-1 release
installed cleanly on RH-5.2, whereas the 1.1.11-1 release is
apparently built for another release (RH-6.0?) and does not run on RH-5.2.
diff --git a/doc/src/sgml/datatype.sgml b/doc/src/sgml/datatype.sgml
index a80674bf627..1900b513bea 100644
--- a/doc/src/sgml/datatype.sgml
+++ b/doc/src/sgml/datatype.sgml
@@ -1,5 +1,5 @@
@@ -262,9 +262,7 @@ $Header: /cvsroot/pgsql/doc/src/sgml/datatype.sgml,v 1.29 2000/04/14 15:08:56 th
- The original Postgres v4.2 code received from
- Berkeley rounded all double precision floating point results to six digits for
- output. Starting with v6.1, floating point numbers are allowed to retain
+ Floating point numbers are allowed to retain
most of the intrinsic precision of the type (typically 15 digits for doubles,
6 digits for 4-byte floats).
Other types with underlying floating point fields (e.g. geometric
@@ -277,8 +275,8 @@ $Header: /cvsroot/pgsql/doc/src/sgml/datatype.sgml,v 1.29 2000/04/14 15:08:56 th
Numeric Types
- Numeric types consist of two- and four-byte integers and four- and eight-byte
- floating point numbers.
+ Numeric types consist of two- and four-byte integers, four- and eight-byte
+ floating point numbers and fixed-precision decimals.
@@ -299,7 +297,7 @@ $Header: /cvsroot/pgsql/doc/src/sgml/datatype.sgml,v 1.29 2000/04/14 15:08:56 th
decimalvariableUser-specified precision
- no limit
+ ~8000 digitsfloat4
@@ -554,13 +552,13 @@ CREATE TABLE tablename (Date/Time Types
- PostgreSQL supports the full set of
+ Postgres supports the full set of
SQL date and time types.
- PostgreSQL Date/Time Types
+ Postgres Date/Time TypesDate/Time
@@ -576,7 +574,7 @@ CREATE TABLE tablename (timestamp
- for data containing both date and time
+ both date and time8 bytes4713 BCAD 1465001
@@ -584,7 +582,7 @@ CREATE TABLE tablename (timestamp with time zone
- date and time including time zone
+ date and time with time zone8 bytes1903 AD2037 AD
@@ -600,7 +598,7 @@ CREATE TABLE tablename (date
- for data containing only dates
+ dates only4 bytes4713 BC32767 AD
@@ -608,7 +606,7 @@ CREATE TABLE tablename (time
- for data containing only times of the day
+ times of day only4 bytes00:00:00.0023:59:59.99
@@ -616,7 +614,7 @@ CREATE TABLE tablename (time with time zone
- times of the day
+ times of day only4 bytes00:00:00.00+1223:59:59.99-12
@@ -628,13 +626,17 @@ CREATE TABLE tablename (
- To ensure compatibility to earlier versions of PostgreSQL
+ To ensure compatibility with earlier versions of Postgres
we also continue to provide datetime (equivalent to timestamp) and
- timespan (equivalent to interval). The types abstime
+ timespan (equivalent to interval);
+ however, support for these is now restricted to having an
+ implicit translation to timestamp and
+ interval.
+ The types abstime
and reltime are lower precision types which are used internally.
You are discouraged from using any of these types in new
applications and are encouraged to move any old
- ones over when appropriate. Any or all of these types might disappear in a future release.
+ ones over when appropriate. Any or all of these internal types might disappear in a future release.
@@ -648,11 +650,11 @@ CREATE TABLE tablename (ISO-8601, SQL-compatible,
traditional Postgres, and others.
The ordering of month and day in date input can be ambiguous, therefore a setting
- exists to specify how it should be interpreted. The command
+ exists to specify how it should be interpreted in ambiguous cases. The command
SET DateStyle TO 'US' or SET DateStyle TO 'NonEuropean'
- specifies the variant month before day, the command
+ specifies the variant "month before day", the command
SET DateStyle TO 'European' sets the variant
- day before month. The ISO style
+ "day before month". The ISO style
is the default but this default can be changed at compile time or at run time.
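The month-before-day convention selected by the 'US' setting can be illustrated outside of Postgres as well; a minimal sketch using GNU date (an analogy only, not the Postgres parser itself):

```shell
# GNU date, like SET DateStyle TO 'US', treats the first field as the month:
date -d '1/8/1999' '+%Y-%m-%d'    # prints 1999-01-08 (January 8, not August 1)
```

A 'European' reader of the same input would expect August 1; this is exactly the ambiguity the DateStyle setting resolves.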
@@ -672,7 +674,7 @@ CREATE TABLE tablename (date type.
- PostgreSQL Date Input
+ Postgres Date InputDate Inputs
@@ -702,10 +704,6 @@ CREATE TABLE tablename (1/18/1999
US; read as January 18 in any mode
-
- 1999.008
- Year and day of year
- 19990108ISO-8601 year, month, day
@@ -724,7 +722,7 @@ CREATE TABLE tablename (January 8, 99 BC
- Year 99 before the common era
+ Year 99 before the Common Era
@@ -733,7 +731,7 @@ CREATE TABLE tablename (
- PostgreSQL Day of Week Abbreviations
+ Postgres Day of Week AbbreviationsDay of Week Abbreviations
@@ -850,7 +848,7 @@ CREATE TABLE tablename (time inputs.
- PostgreSQL Time Input
+ Postgres Time InputTime Inputs
@@ -904,13 +902,14 @@ CREATE TABLE tablename (time with time zone
+
This type is defined by SQL92, but the definition exhibits
- fundamental deficiencies which renders the type near useless. In
+ fundamental deficiencies which render the type nearly useless. In
most cases, a combination of date,
- time, and timestamp with time zone
+ time, and timestamp
should provide a complete range of date/time functionality
- required by an application.
+ required by any application.
@@ -919,7 +918,7 @@ CREATE TABLE tablename (
- PostgreSQL Time With Time
+ Postgres Time With Time
Zone InputTime With Time Zone Inputs
@@ -959,89 +958,97 @@ CREATE TABLE tablename (timestamp
-
- Valid input for the timestamp type consists of a concatenation
- of a date and a time, followed by an optional AD or
- BC, followed by an optional time zone. (See below.)
- Thus
-
-1999-01-08 04:05:06 -8:00
-
- is a valid timestamp value, which is ISO-compliant.
- In addition, the wide-spread format
-
-January 8 04:05:06 1999 PST
-
- is supported.
-
-
-
- PostgreSQL Time Zone Input
- Time Zone Inputs
-
-
-
- Time Zone
- Description
-
-
-
-
- PST
- Pacific Standard Time
-
-
- -8:00
- ISO-8601 offset for PST
-
-
- -800
- ISO-8601 offset for PST
-
-
- -8
- ISO-8601 offset for PST
-
-
-
-
-
+
+ Valid input for the timestamp type consists of a concatenation
+ of a date and a time, followed by an optional AD or
+ BC, followed by an optional time zone. (See below.)
+ Thus
+
+
+1999-01-08 04:05:06 -8:00
+
+
+ is a valid timestamp value, which is ISO-compliant.
+ In addition, the widespread format
+
+
+January 8 04:05:06 1999 PST
+
+ is supported.
+
+
+
+
+ Postgres Time Zone Input
+ Time Zone Inputs
+
+
+
+ Time Zone
+ Description
+
+
+
+
+ PST
+ Pacific Standard Time
+
+
+ -8:00
+ ISO-8601 offset for PST
+
+
+ -800
+ ISO-8601 offset for PST
+
+
+ -8
+ ISO-8601 offset for PST
+
+
+
+
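All four spellings in the table denote the same UTC offset; as a cross-check outside of Postgres, GNU date (assumed available) converts a timestamp carrying the ISO-8601 offset to the expected UTC time:

```shell
# 04:05:06 at offset -08:00 (PST) is 12:05:06 UTC;
# PST, -8:00, -800 and -8 all denote this same offset
date -u -d '1999-01-08 04:05:06 -08:00' '+%Y-%m-%d %H:%M:%S'
# prints 1999-01-08 12:05:06
```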
+ interval
+
intervals can be specified with the following syntax:
-
+
+
Quantity Unit [Quantity Unit...] [Direction]
@ Quantity Unit [Direction]
-
- where: Quantity is ..., -1,
- 0, 1, 2, ...;
- Unit is second,
- minute, hour, day,
- week, month, year,
- decade, century, millennium,
- or abbreviations or plurals of these units;
- Direction can be ago or
- empty.
-
-
+
+
+ where: Quantity is ..., -1,
+ 0, 1, 2, ...;
+ Unit is second,
+ minute, hour, day,
+ week, month, year,
+ decade, century, millennium,
+ or abbreviations or plurals of these units;
+ Direction can be ago or
+ empty.
+
+
- Special values
-
- The following SQL-compatible functions can be used as date or time
- input for the corresponding datatype: CURRENT_DATE,
- CURRENT_TIME, CURRENT_TIMESTAMP.
-
-
- PostgreSQL also supports several special constants for
- convenience.
+ Special values
+
+
+ The following SQL-compatible functions can be used as date or time
+ input for the corresponding datatype: CURRENT_DATE,
+ CURRENT_TIME, CURRENT_TIMESTAMP.
+
+
+ Postgres also supports several special constants for
+ convenience.
- PostgresSQL Special Date/Time Constants
+ Postgres Special Date/Time ConstantsConstants
@@ -1110,7 +1117,7 @@ January 8 04:05:06 1999 PST
The default is the ISO format.
- PostgreSQL Date/Time Output Styles
+ Postgres Date/Time Output StylesStyles
@@ -1148,7 +1155,7 @@ January 8 04:05:06 1999 PST
The output of the date and time styles is of course
- only the date or time part in accordance with the above examples
+ only the date or time part in accordance with the above examples.
@@ -1157,22 +1164,25 @@ January 8 04:05:06 1999 PST
at Date/Time Input, how this setting affects interpretation of input values.)
- PostgreSQL Date Order Conventions
- Order
+ Postgres Date Order Conventions
+ Date OrderStyle Specification
+ DescriptionExampleEuropean
+ day/month/year17/12/1997 15:37:16.00 METUS
+ month/day/year12/17/1997 07:37:16.00 PST
@@ -1181,9 +1191,10 @@ January 8 04:05:06 1999 PST
- interval output looks like the input format, expect that units like
+ interval output looks like the input format, except that units like
week or century are converted to years and days.
In ISO mode the output looks like
+
[ Quantity Units [ ... ] ] [ Days ] Hours:Minutes [ ago ]
@@ -1219,7 +1230,7 @@ January 8 04:05:06 1999 PST
Time Zones
- PostgreSQL endeavors to be compatible with
+ Postgres endeavors to be compatible with
SQL92 definitions for typical usage.
However, the SQL92 standard has an odd mix of date and
time types and capabilities. Two obvious problems are:
@@ -1249,7 +1260,7 @@ January 8 04:05:06 1999 PST
- To address these difficulties, PostgreSQL
+ To address these difficulties, Postgres
associates time zones only with date and time
types which contain both date and time,
and assumes local time for any type containing only
@@ -1260,7 +1271,7 @@ January 8 04:05:06 1999 PST
- PostgreSQL obtains time zone support
+ Postgres obtains time zone support
from the underlying operating system for dates between 1902 and
2038 (near the typical date limits for Unix-style
systems). Outside of this range, all dates are assumed to be
@@ -1322,7 +1333,7 @@ January 8 04:05:06 1999 PST
Internals
- PostgreSQL uses Julian dates
+ Postgres uses Julian dates
for all date/time calculations. They have the nice property of correctly
predicting/calculating any date more recent than 4713BC
to far into the future, using the assumption that the length of the
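The Julian Day numbering referred to here can be computed with integer arithmetic alone; a sketch in shell arithmetic using the standard civil-date-to-JDN formula (an illustration, not the actual Postgres source code):

```shell
# Convert a Gregorian calendar date to a Julian Day Number (integer arithmetic only)
jdn() {
  y=$1; m=$2; d=$3
  a=$(( (14 - m) / 12 ))          # 1 for Jan/Feb, else 0
  yy=$(( y + 4800 - a ))          # years since -4800, Mar-based
  mm=$(( m + 12*a - 3 ))          # months since March
  echo $(( d + (153*mm + 2)/5 + 365*yy + yy/4 - yy/100 + yy/400 - 32045 ))
}
jdn 2000 1 1    # prints 2451545
```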
@@ -1476,13 +1487,32 @@ January 8 04:05:06 1999 PST
point is specified using the following syntax:
-
-( x , y )
- x , y
-where
- x is the x-axis coordinate as a floating point number
- y is the y-axis coordinate as a floating point number
-
+
+( x , y )
+ x , y
+
+
+ where the arguments are
+
+
+
+ x
+
+
+ The x-axis coordinate as a floating point number.
+
+
+
+
+
+ y
+
+
+ The y-axis coordinate as a floating point number.
+
+
+
+
@@ -1495,13 +1525,26 @@ where
lseg is specified using the following syntax:
-
-( ( x1 , y1 ) , ( x2 , y2 ) )
- ( x1 , y1 ) , ( x2 , y2 )
- x1 , y1 , x2 , y2
-where
- (x1,y1) and (x2,y2) are the endpoints of the segment
-
+
+
+( ( x1 , y1 ) , ( x2 , y2 ) )
+ ( x1 , y1 ) , ( x2 , y2 )
+ x1 , y1 , x2 , y2
+
+
+ where the arguments are
+
+
+
+ (x1,y1)
+ (x2,y2)
+
+
+ The endpoints of the line segment.
+
+
+
+
@@ -1516,14 +1559,28 @@ where
box is specified using the following syntax:
-
-( ( x1 , y1 ) , ( x2 , y2 ) )
- ( x1 , y1 ) , ( x2 , y2 )
- x1 , y1 , x2 , y2
-where
- (x1,y1) and (x2,y2) are opposite corners
-
+
+( ( x1 , y1 ) , ( x2 , y2 ) )
+ ( x1 , y1 ) , ( x2 , y2 )
+ x1 , y1 , x2 , y2
+
+ where the arguments are
+
+
+
+ (x1,y1)
+ (x2,y2)
+
+
+ Opposite corners of the box.
+
+
+
+
+
+
+
Boxes are output using the first syntax.
The corners are reordered on input to store
the lower left corner first and the upper right corner last.
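The corner reordering described above is just a min/max on each axis; a sketch in awk (illustrative only, not the backend's actual box input routine):

```shell
# Normalize two box corners to (lower-left),(upper-right), as box input does
echo '2,3 0,1' | awk '{
  split($1, a, ","); split($2, b, ",")
  llx = (a[1]+0 < b[1]+0) ? a[1] : b[1]; lly = (a[2]+0 < b[2]+0) ? a[2] : b[2]
  urx = (a[1]+0 > b[1]+0) ? a[1] : b[1]; ury = (a[2]+0 > b[2]+0) ? a[2] : b[2]
  printf "(%s,%s),(%s,%s)\n", llx, lly, urx, ury
}'
# prints (0,1),(2,3)
```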
@@ -1546,24 +1603,37 @@ where
isopen(p)
and
isclosed(p)
- are supplied to select either type in a query.
+ are supplied to test for either type in a query.
path is specified using the following syntax:
-
-( ( x1 , y1 ) , ... , ( xn , yn ) )
-[ ( x1 , y1 ) , ... , ( xn , yn ) ]
- ( x1 , y1 ) , ... , ( xn , yn )
- ( x1 , y1 , ... , xn , yn )
- x1 , y1 , ... , xn , yn
-where
- (x1,y1),...,(xn,yn) are points 1 through n
- a leading "[" indicates an open path
- a leading "(" indicates a closed path
-
+
+( ( x1 , y1 ) , ... , ( xn , yn ) )
+[ ( x1 , y1 ) , ... , ( xn , yn ) ]
+ ( x1 , y1 ) , ... , ( xn , yn )
+ ( x1 , y1 , ... , xn , yn )
+ x1 , y1 , ... , xn , yn
+
+ where the arguments are
+
+
+
+ (x,y)
+
+
+ Endpoints of the line segments comprising the path.
+ A leading square bracket ("[") indicates an open path, while
+ a leading parenthesis ("(") indicates a closed path.
+
+
+
+
+
+
+
Paths are output using the first syntax.
Note that Postgres versions prior to
v6.1 used a format for paths which had a single leading parenthesis,
@@ -1587,19 +1657,33 @@ where
polygon is specified using the following syntax:
-
-( ( x1 , y1 ) , ... , ( xn , yn ) )
- ( x1 , y1 ) , ... , ( xn , yn )
- ( x1 , y1 , ... , xn , yn )
- x1 , y1 , ... , xn , yn
-where
- (x1,y1),...,(xn,yn) are points 1 through n
-
+
+( ( x1 , y1 ) , ... , ( xn , yn ) )
+ ( x1 , y1 ) , ... , ( xn , yn )
+ ( x1 , y1 , ... , xn , yn )
+ x1 , y1 , ... , xn , yn
+
+ where the arguments are
+
+
+
+ (x,y)
+
+
+ Endpoints of the line segments comprising the boundary of the
+ polygon.
+
+
+
+
+
+
+
Polygons are output using the first syntax.
Note that Postgres versions prior to
v6.1 used a format for polygons which had a single leading parenthesis, the list
- of x-axis coordinates, the list of y-axis coordinates,
+ of x-axis coordinates, the list of y-axis coordinates,
followed by a closing parenthesis.
The built-in function upgradepoly is supplied to convert
polygons dumped and reloaded from pre-v6.1 databases.
@@ -1616,16 +1700,37 @@ where
circle is specified using the following syntax:
-
-< ( x , y ) , r >
-( ( x , y ) , r )
- ( x , y ) , r
- x , y , r
-where
- (x,y) is the center of the circle
- r is the radius of the circle
-
+
+< ( x , y ) , r >
+( ( x , y ) , r )
+ ( x , y ) , r
+ x , y , r
+
+ where the arguments are
+
+
+
+ (x,y)
+
+
+ Center of the circle.
+
+
+
+
+
+ r
+
+
+ Radius of the circle.
+
+
+
+
+
+
+
Circles are output using the first syntax.
diff --git a/doc/src/sgml/datetime.sgml b/doc/src/sgml/datetime.sgml
index bfc3666a9fe..dcc63d55646 100644
--- a/doc/src/sgml/datetime.sgml
+++ b/doc/src/sgml/datetime.sgml
@@ -1,5 +1,5 @@
@@ -645,7 +645,7 @@ Date/time details
- Julian Day is different from Julian Date.
+ "Julian Day" is different from "Julian Date".
The Julian calendar was introduced by Julius Caesar in 45 BC. It was
in common use until 1582, when countries started changing to the
diff --git a/doc/src/sgml/dfunc.sgml b/doc/src/sgml/dfunc.sgml
index c7a0bba563a..ccdb25e60c1 100644
--- a/doc/src/sgml/dfunc.sgml
+++ b/doc/src/sgml/dfunc.sgml
@@ -1,5 +1,5 @@
@@ -7,105 +7,6 @@ $Header: /cvsroot/pgsql/doc/src/sgml/dfunc.sgml,v 1.9 2000/03/31 03:27:40 thomas
-
-
After you have created and registered a user-defined
function, your work is essentially done.
Postgres,
@@ -120,8 +21,6 @@ procedure.
describes how to perform the compilation and
link-editing required before you can load your user-defined
functions into a running Postgres server.
- Note that
- this process has changed as of Version 4.2.
DEC OSF/1
@@ -327,14 +251,15 @@ procedure.
file with special compiler flags and a shared library
must be produced.
The necessary steps with HP-UX are as follows. The +z
- flag to the HP-UX C compiler produces so-called
- "Position Independent Code" (PIC) and the +u flag
- removes
+ flag to the HP-UX C compiler produces
+ Position Independent Code (PIC)
+ and the +u flag removes
some alignment restrictions that the PA-RISC architecture
normally enforces. The object file must be turned
into a shared library using the HP-UX link editor with
the -b option. This sounds complicated but is actually
very simple, since the commands to do it are just:
+
# simple HP-UX example
% cc +z +u -c foo.c
@@ -375,6 +300,95 @@ procedure.
command line.
+
+
+
@@ -45,7 +23,7 @@ Add a note on sgml-tools that they are now working with jade and so
The purpose of documentation is to make Postgres
- easier to learn, use, and develop.
+ easier to learn, use, and extend.
The documentation set should describe the Postgres
system, language, and interfaces.
It should be able to answer
@@ -61,18 +39,26 @@ Add a note on sgml-tools that they are now working with jade and so
formats:
-
+
+
Plain text for pre-installation information.
-
-
+
+
+
+
HTML, for on-line browsing and reference.
-
-
- Hardcopy, for in-depth reading and reference.
-
-
+
+
+
+
+ Hardcopy (Postscript or PDF), for in-depth reading and reference.
+
+
+
+
man pages, for quick reference.
-
+
+
@@ -983,7 +969,7 @@ $ make man
- Hardcopy Generation for v6.5
+ Hardcopy Generation for v7.0
The hardcopy Postscript documentation is generated by converting the
@@ -1084,14 +1070,14 @@ $ make man
- Export the result as ASCII Layout.
+ Export the result as "ASCII Layout".
Using emacs or vi, clean up the tabular information in
- INSTALL. Remove the mailto
+ INSTALL. Remove the "mailto"
URLs for the porting contributors to shrink
the column heights.
@@ -1104,19 +1090,21 @@ $ make man
Several areas are addressed while generating Postscript
- hardcopy.
+ hardcopy, including RTF repair, ToC generation, and page break
+ adjustments.
Applixware RTF Cleanup
- Applixware does not seem to do a complete job of importing RTF
- generated by jade/MSS. In particular, all text is given the
- Header1 style attribute label, although the text
- formatting itself is acceptable. Also, the Table of Contents page
- numbers do not refer to the section listed in the table, but rather
- refer to the page of the ToC itself.
+ jade, an integral part of the
+ hardcopy procedure, omits specifying a default style for body
+ text. In the past, this undiagnosed problem led to a long process
+ of Table of Contents (ToC) generation. However, with great help
+ from the ApplixWare folks the symptom was diagnosed and a
+ workaround is available.
+
@@ -1128,6 +1116,35 @@ $ make man
+
+
+
+ Repair the RTF file to correctly specify all
+ styles, in particular the default style. The field can be added
+ using vi or the following small
+ sed procedure:
+
+
+#!/bin/sh
+# fixrtf.sh
+# Utility to repair slight damage in RTF files generated by jade
+# Thomas Lockhart <lockhart@alumni.caltech.edu>
+#
+for i in $* ; do
+ mv $i $i.orig
+ cat $i.orig | sed 's#\\stylesheet#\\stylesheet{\\s0 Normal;}#' > $i
+done
+
+exit
+
+
+ where the script is adding {\s0 Normal;} as
+ the zero-th style in the document. According to ApplixWare, the
+ RTF standard would prohibit adding an implicit zero-th style,
+ though M$Word happens to handle this case.
+
+
+
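The effect of the sed substitution above can be checked on a throwaway fragment; a small demonstration of the same one-line repair (the real script operates on jade's full RTF output):

```shell
# Insert the missing zero-th "Normal" style into the \stylesheet group,
# exactly as fixrtf.sh does
printf '{\\rtf1{\\stylesheet{\\s1 heading 1;}}}\n' |
  sed 's#\\stylesheet#\\stylesheet{\\s0 Normal;}#'
# prints {\rtf1{\stylesheet{\s0 Normal;}{\s1 heading 1;}}}
```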
Open a new document in Applix Words and
@@ -1137,55 +1154,152 @@ $ make man
- Print out the existing Table of Contents, to mark up in the following
- few steps.
+ Generate a new ToC using ApplixWare.
+
+
+
+
+
+ Select the existing ToC lines, from the beginning of the first
+ character on the first line to the last character of the last
+ line.
+
+
+
+
+
+ Build a new ToC using
+ Tools.BookBuilding.CreateToC. Select the
+ first three levels of headers for inclusion in the ToC.
+ This will
+ replace the existing lines imported in the RTF with a native
+ ApplixWare ToC.
+
+
+
+
+
+ Adjust the ToC formatting by using
+ Format.Style, selecting each of the three
+ ToC styles, and adjusting the indents for First and
+ Left. Use the following values:
+
+
+ Directory Layout
+
+ Kerberos
+
+
+
+
+ Directory
+ Description
+
+
+
+
+ Directory
+ Description
+
+
+ input
+
+ Source files that are converted using
+ make all into
+ some of the .sql files in the
+ sql subdirectory.
+
+
+
+
+ output
+
+ Source files that are converted using
+ make all into
+ .out files in the
+ expected subdirectory.
+
+
+
+
+ sql
+
+ .sql files used to perform the
+ regression tests.
+
+
+
+
+ expected
+
+ .out files that represent what we
+ expect the results to
+ look like.
+
+
+
+
+ results
+
+ .out files that contain what the results
+ actually look
+ like. Also used as temporary storage for table copy testing.
+
+
+
+
+ tmp_check
+
+ Temporary installation created by parallel testing script.
+
+
+
+
+
+
-
- Directory Layout
+
+ Regression Test Procedure
-
-
-
- This should become a table in the previous section.
-
-
-
-
-
-
- input/ .... .source files that are converted using 'make all' into
- some of the .sql files in the 'sql' subdirectory
-
- output/ ... .source files that are converted using 'make all' into
- .out files in the 'expected' subdirectory
-
- sql/ ...... .sql files used to perform the regression tests
-
- expected/ . .out files that represent what we *expect* the results to
- look like
-
- results/ .. .out files that contain what the results *actually* look
- like. Also used as temporary storage for table copy testing.
-
- tmp_check/ temporary installation created by parallel testing script.
-
-
-
-
-
- Regression Test Procedure
-
-
+
Commands were tested on RedHat Linux version 4.2 using the bash shell.
Except where noted, they will probably work on most systems. Commands
- like ps and tar vary wildly on what options you should use on each
- platform. Use common sense before typing in these commands.
-
+ like ps and tar vary
+ wildly in what options you should use on each
+ platform. Use common sense before typing in these commands.
+
-
- Postgres Regression Test
+
+ Postgres Regression Test
-
-
+
+
Prepare the files needed for the regression test with:
-
+
cd /usr/src/pgsql/src/test/regress
gmake clean
gmake all
-
+
You can skip "gmake clean" if this is the first time you
are running the tests.
-
- This step compiles a C
+
+ This step compiles a C
program with PostgreSQL extension functions into a shared library.
Localized SQL scripts and output-comparison files are also created
for the tests that need them. The localization replaces macros in
the source files with absolute pathnames and user names.
-
+
-
-
+
+
If you intend to use the "sequential" test procedure, which tests
an already-installed postmaster, be sure that the postmaster
is running. If it isn't already running,
start the postmaster in an available window by typing
-
+
postmaster
-
+
or start the postmaster daemon running in the background by typing
-
+
cd
nohup postmaster > regress.log 2>&1 &
-
+
The latter is probably preferable, since the regression test log
will be quite lengthy (60K or so, in
- Postgres 7.0) and you might want to
+ Postgres 7.0) and you might want to
review it for clues if things go wrong.
-
-
- Do not run postmaster from the root account.
-
-
-
-
+
+
+ Do not run postmaster from the root account.
+
+
+
+
-
-
+
+
Run the regression tests. For a sequential test, type
-
+
cd /usr/src/pgsql/src/test/regress
gmake runtest
-
+
For a parallel test, type
-
+
cd /usr/src/pgsql/src/test/regress
gmake runcheck
-
+
The sequential test just runs the test scripts using your
already-running postmaster.
The parallel test will perform a complete installation of
- Postgres into a temporary directory,
+ Postgres into a temporary directory,
start a private postmaster therein, and then run the test scripts.
Finally it will kill the private postmaster (but the temporary
directory isn't removed automatically).
-
-
+
+
-
-
+
+
You should get on the screen (and also written to file ./regress.out)
a series of statements stating which tests passed and which tests
failed. Please note that it can be normal for some of the tests to
"fail" due to platform-specific variations. See the next section
for details on determining whether a "failure" is significant.
-
-
+
+
Some of the tests, notably "numeric", can take a while, especially
on slower platforms. Have patience.
-
-
+
+
-
-
+
+
After running the tests and examining the results, type
-
+
cd /usr/src/pgsql/src/test/regress
gmake clean
-
+
to recover the temporary disk space used by the tests.
If you ran a sequential test, also type
-
+
dropdb regression
-
-
-
+
+
+
-
+
-
- Regression Analysis
+
+ Regression Analysis
-
+
The actual outputs of the regression tests are in files in the
./results directory. The test script
uses diff to compare each output file
@@ -270,101 +317,101 @@ The runtime path is /usr/local/pgsql (other paths are possible).
saved for your inspection in
./regression.diffs. (Or you can run
diff yourself, if you prefer.)
-
+
-
+
The files might not compare exactly. The test script will report
any difference as a "failure", but the difference might be due
to small cross-system differences in error message wording,
math library behavior, etc.
"Failures" of this type do not indicate a problem with
- Postgres.
-
+ Postgres.
+
-
+
Thus, it is necessary to examine the actual differences for each
"failed" test to determine whether there is really a problem.
The following paragraphs attempt to provide some guidance in
determining whether a difference is significant or not.
-
+
-
- Error message differences
+
+ Error message differences
-
+
Some of the regression tests involve intentional invalid input values.
Error messages can come from either the Postgres code or from the host
platform system routines. In the latter case, the messages may vary
between platforms, but should reflect similar information. These
differences in messages will result in a "failed" regression test which
can be validated by inspection.
-
+
-
+
-
- Date and time differences
+
+ Date and time differences
-
+
Most of the date and time results are dependent on timezone environment.
The reference files are generated for timezone PST8PDT (Berkeley,
California) and there will be apparent failures if the tests are not
run with that timezone setting. The regression test driver sets
environment variable PGTZ to PST8PDT to ensure proper results.
-
+
-
+
Some of the queries in the "timestamp" test will fail if you run
the test on the day of a daylight-savings time changeover, or the
day before or after one. These queries assume that the intervals
between midnight yesterday, midnight today and midnight tomorrow are
exactly twenty-four hours ... which is wrong if daylight-savings time
went into or out of effect meanwhile.
-
+
-
+
There appear to be some systems which do not accept the recommended syntax
for explicitly setting the local time zone rules; you may need to use
a different PGTZ setting on such machines.
-
+
-
+
Some systems using older timezone libraries fail to apply daylight-savings
corrections to pre-1970 dates, causing pre-1970 PDT times to be displayed
in PST instead. This will result in localized differences in the test
results.
-
+
-
+
-
- Floating point differences
+
+ Floating point differences
-
- Some of the tests involve computing 64-bit (float8) numbers from table
+
+ Some of the tests involve computing 64-bit (float8) numbers from table
columns. Differences in results involving mathematical functions of
- float8 columns have been observed. The float8
+ float8 columns have been observed. The float8
and geometry tests are particularly prone to small differences
across platforms.
Human eyeball comparison is needed to determine the real significance
of these differences which are usually 10 places to the right of
the decimal point.
-
+
-
+
Some systems signal errors from pow() and exp() differently from
the mechanism expected by the current Postgres code.
-
+
-
+
-
- Polygon differences
+
+ Polygon differences
-
+
Several of the tests involve operations on geographic data about the
Oakland/Berkeley CA street map. The map data is expressed as polygons
- whose vertices are represented as pairs of float8 numbers (decimal
+ whose vertices are represented as pairs of float8 numbers (decimal
latitude and longitude). Initially, some tables are created and
loaded with geographic data, then some views are created which join
two tables using the polygon intersection operator (##), then a select
@@ -374,65 +421,65 @@ The runtime path is /usr/local/pgsql (other paths are possible).
in the 2nd or 3rd place to the right of the decimal point. The SQL
statements where these problems occur are the following:
-
+
QUERY: SELECT * from street;
QUERY: SELECT * from iexit;
-
-
+
+
-
+
-
- Random differences
+
+ Random differences
-
+
There is at least one case in the "random" test script that is
intended to produce
random results. This causes random to fail the regression test
once in a while (perhaps once in every five to ten trials).
Typing
-
+
diff results/random.out expected/random.out
-
+
should produce only one or a few lines of differences. You need
not worry unless the random test always fails in repeated attempts.
(On the other hand, if the random test is never
reported to fail even in many trials of the regress tests, you
probably should worry.)
-
+
-
+
-
- The expected files
+
+ The "expected" files
-
- The ./expected/*.out files were adapted from the original monolithic
- expected.input file provided by Jolly Chen et al. Newer versions of these
+
+ The ./expected/*.out files were adapted from the original monolithic
+ expected.input file provided by Jolly Chen et al. Newer versions of these
files generated on various development machines have been substituted after
careful (?) inspection. Many of the development machines are running a
Unix OS variant (FreeBSD, Linux, etc) on Ix86 hardware.
- The original expected.input file was created on a SPARC Solaris 2.4
- system using the postgres5-1.02a5.tar.gz source tree. It was compared
+ The original expected.input file was created on a SPARC Solaris 2.4
+ system using the postgres5-1.02a5.tar.gz source tree. It was compared
with a file created on an I386 Solaris 2.4 system and the differences
were only in the floating point polygons in the 3rd digit to the right
of the decimal point.
- The original sample.regress.out file was from the postgres-1.01 release
+ The original sample.regress.out file was from the postgres-1.01 release
constructed by Jolly Chen. It may
- have been created on a DEC ALPHA machine as the Makefile.global
+ have been created on a DEC ALPHA machine as the Makefile.global
in the postgres-1.01 release has PORTNAME=alpha.
-
+
-
+
-
+
-
- Platform-specific comparison files
+
+ Platform-specific comparison files
-
+
Since some of the tests inherently produce platform-specific results,
we have provided a way to supply platform-specific result comparison
files. Frequently, the same variation applies to multiple platforms;
@@ -441,42 +488,59 @@ The runtime path is /usr/local/pgsql (other paths are possible).
So, to eliminate bogus test "failures" for a particular platform,
you must choose or make a variant result file, and then add a line
to the mapping file, which is "resultmap".
-
+
-
+
Each line in the mapping file is of the form
-
+
testname/platformnamepattern=comparisonfilename
-
+
The test name is just the name of the particular regression test module.
The platform name pattern is a pattern in the style of expr(1) (that is,
a regular expression with an implicit ^ anchor at the start). It is matched
against the platform name as printed by config.guess. The comparison
file name is the name of the substitute result comparison file.
-
+
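The expr(1)-style match with its implicit ^ anchor can be tried directly; a sketch of how a resultmap entry such as int2/hppa=int2-too-large would select its platform (illustrative only, not the test driver itself):

```shell
# expr's ':' operator anchors the regular expression at the start of the
# string, which is why resultmap patterns match platform-name prefixes
platform='hppa1.1-hp-hpux10.20'    # as printed by config.guess
if expr "$platform" : 'hppa' >/dev/null ; then
  echo 'use variant comparison file int2-too-large.out'
fi
# prints use variant comparison file int2-too-large.out
```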
-
+
For example: the int2 regress test includes a deliberate entry of a value
that is too large to fit in int2. The specific error message that is
produced is platform-dependent; our reference platform emits
-
+
ERROR: pg_atoi: error reading "100000": Numerical result out of range
-
+
but a fair number of other Unix platforms emit
-
+
ERROR: pg_atoi: error reading "100000": Result too large
-
+
Therefore, we provide a variant comparison file, int2-too-large.out,
that includes this spelling of the error message. To silence the
bogus "failure" message on HPPA platforms, resultmap includes
-
+
int2/hppa=int2-too-large
-
+
which will trigger on any machine for which config.guess's output
begins with 'hppa'. Other lines in resultmap select the variant
comparison file for other platforms where it's appropriate.
-
+
-
+
-
+
+
+
diff --git a/doc/src/sgml/release.sgml b/doc/src/sgml/release.sgml
index 7b93b645c0f..0df2c3ba832 100644
--- a/doc/src/sgml/release.sgml
+++ b/doc/src/sgml/release.sgml
@@ -1,5 +1,5 @@
@@ -19,9 +19,11 @@ $Header: /cvsroot/pgsql/doc/src/sgml/release.sgml,v 1.46 2000/05/02 17:06:10 mom
-->
- This release shows the continued growth of PostgreSQL. There are more
- changes in 7.0 than in any previous release. Don't be concerned this is
- a dot-zero release. We do our best to put out only solid releases, and
+ This release contains improvements in many areas, demonstrating
+ the continued growth of PostgreSQL.
+ There are more improvements and fixes in 7.0 than in any previous
+ release. The developers have confidence that this is the best
+ release yet; we do our best to put out only solid releases, and
this one is no exception.
@@ -49,7 +51,7 @@ $Header: /cvsroot/pgsql/doc/src/sgml/release.sgml,v 1.46 2000/05/02 17:06:10 mom
Continuing on work started a year ago, the optimizer has been
- overhauled, allowing improved query execution and better performance
+ improved, allowing better query plan selection and faster performance
with less memory usage.
@@ -80,7 +82,7 @@ $Header: /cvsroot/pgsql/doc/src/sgml/release.sgml,v 1.46 2000/05/02 17:06:10 mom
-
+
@@ -102,10 +105,75 @@ $Header: /cvsroot/pgsql/doc/src/sgml/release.sgml,v 1.46 2000/05/02 17:06:10 mom
A dump/restore using pg_dump
is required for those wishing to migrate data from any
previous release of Postgres.
- For those upgrading from 6.5.*, you can use
+ For those upgrading from 6.5.*, you may instead use
pg_upgrade to upgrade to this
- release.
+ release; however, a full dump/reload installation is always the
+ most robust method for upgrades.
+
+
+ Interface and compatibility issues to consider for the new
+ release include:
+
+
+
+
+ The date/time types datetime and
+ timespan have been superseded by the
+ SQL92-defined types timestamp and
+ interval. Although there has been some effort to
+ ease the transition by allowing
+ Postgres to recognize
+ the deprecated type names and translate them to the new type
+ names, this mechanism may not be completely transparent to
+ your existing application.
+
+
+
+
+
+
+
+ The optimizer has been substantially improved in the area of
+ query cost estimation. In some cases, this will result in
+ decreased query times as the optimizer makes a better choice
+ for the preferred plan. However, in a small number of cases,
+ usually involving pathological distributions of data, your
+ query times may go up. If you are dealing with large amounts
+ of data, you may want to check your queries to verify
+ performance.
+
+
+
+
+
+ The JDBC and ODBC
+ interfaces have been upgraded and extended.
+
+
+
+
+
+ The string function CHAR_LENGTH is now a
+ native function. Previous versions translated this into a call
+ to LENGTH, which could result in
+ ambiguity with other types implementing
+ LENGTH such as the geometric types.
+
+
+
+
+
@@ -114,7 +182,7 @@ $Header: /cvsroot/pgsql/doc/src/sgml/release.sgml,v 1.46 2000/05/02 17:06:10 mom
Bug Fixes
---------
-Prevent function calls with more than maximum number of arguments (Tom)
+Prevent function calls exceeding maximum number of arguments (Tom)
Improve CASE construct (Tom)
Fix SELECT coalesce(f1,0) FROM int4_tbl GROUP BY f1 (Tom)
Fix SELECT sentence.words[0] FROM sentence GROUP BY sentence.words[0] (Tom)
@@ -125,15 +193,15 @@ Fix for SELECT a/2, a/2 FROM test_missing_target GROUP BY a/2 (Tom)
Fix for subselects in INSERT ... SELECT (Tom)
Prevent INSERT ... SELECT ... ORDER BY (Tom)
Fixes for relations greater than 2GB, including vacuum
-Improve communication of system table changes to other running backends (Tom)
-Improve communication of user table modifications to other running backends (Tom)
+Improve propagation of system table changes to other backends (Tom)
+Improve propagation of user table changes to other backends (Tom)
Fix handling of temp tables in complex situations (Bruce, Tom)
-Allow table locking when tables opened, improving concurrent reliability (Tom)
+Allow table locking at table open, improving concurrent reliability (Tom)
Properly quote sequence names in pg_dump (Ross J. Reedstrom)
Prevent DROP DATABASE while others accessing
Prevent any rows from being returned by GROUP BY if no rows processed (Tom)
Fix SELECT COUNT(1) FROM table WHERE ... if no rows matching WHERE (Tom)
-Fix pg_upgrade so it works for MVCC(Tom)
+Fix pg_upgrade so it works for MVCC (Tom)
Fix for SELECT ... WHERE x IN (SELECT ... HAVING SUM(x) > 1) (Tom)
Fix for "f1 datetime DEFAULT 'now'" (Tom)
Fix problems with CURRENT_DATE used in DEFAULT (Tom)
@@ -141,8 +209,8 @@ Allow comment-only lines, and ;;; lines too. (Tom)
Improve recovery after failed disk writes, disk full (Hiroshi)
Fix cases where table is mentioned in FROM but not joined (Tom)
Allow HAVING clause without aggregate functions (Tom)
-Fix for "--" comment and no trailing newline, as seen in Perl
-Improve pg_dump failure error reports (Bruce)
+Fix for "--" comment and no trailing newline, as seen in Perl interface
+Improve pg_dump failure error reports (Bruce)
Allow sorts and hashes to exceed 2GB file sizes (Tom)
Fix for pg_dump dumping of inherited rules (Tom)
Fix for NULL handling comparisons (Tom)
@@ -197,8 +265,7 @@ Update jdbc protocol to 2.0 (Jens GlaserMike Mascari)
libpq's PQsetNoticeProcessor function now returns previous hook (Peter E)
@@ -227,8 +293,7 @@ Change backend-side COPY to write files with permissions 644 not 666 (Tom)
Force permissions on PGDATA directory to be secure, even if it exists (Tom)
Added psql LASTOID variable to return last inserted oid (Peter E)
Allow concurrent vacuum and remove pg_vlock vacuum lock file (Tom)
-Add permissions check so only Postgres superuser or table owner can
-vacuum (Peter E)
+Add permissions check for vacuum (Peter E)
New libpq functions to allow asynchronous connections: PQconnectStart(),
PQconnectPoll(), PQresetStart(), PQresetPoll(), PQsetenvStart(),
PQsetenvPoll(), PQsetenvAbort (Ewan Mellor)
@@ -236,8 +301,8 @@ New libpq PQsetenv() function (Ewan Mellor)
create/alter user extension (Peter E)
New postmaster.pid and postmaster.opts under $PGDATA (Tatsuo)
New scripts for create/drop user/db (Peter E)
-Major psql overhaul(Peter E)
-Add const to libpq interface(Peter E)
+Major psql overhaul (Peter E)
+Add const to libpq interface (Peter E)
New libpq function PQoidValue (Peter E)
Show specific non-aggregate causing problem with GROUP BY (Tom)
Make changes to pg_shadow recreate pg_pwd file (Peter E)
@@ -281,12 +346,11 @@ Allow SELECT .. FOR UPDATE in PL/pgSQL (Hiroshi)
Enable backward sequential scan even after reaching EOF (Hiroshi)
Add btree indexing of boolean values, >= and <= (Don Baccus)
Print current line number when COPY FROM fails (Massimo)
-Recognize special case of POSIX time zone: "GMT+8" and "GMT-8" (Thomas)
-Add DEC as synonym for "DECIMAL" (Thomas)
+Recognize POSIX time zone e.g. "PST+8" and "GMT-8" (Thomas)
+Add DEC as synonym for DECIMAL (Thomas)
Add SESSION_USER as SQL92 keyword, same as CURRENT_USER (Thomas)
-Implement column aliases (aka correlation names) and join syntax (Thomas)
-Allow queries like SELECT a FROM t1 tx (a) (Thomas)
-Allow queries like SELECT * FROM t1 NATURAL JOIN t2 (Thomas)
+Implement SQL92 column aliases (aka correlation names) (Thomas)
+Implement SQL92 join syntax (Thomas)
Make INTERVAL reserved word allowed as a column identifier (Thomas)
Implement REINDEX command (Hiroshi)
Accept ALL in aggregate function SUM(ALL col) (Tom)
@@ -322,9 +386,8 @@ Allow bare column names to be subscripted as arrays (Tom)
Improve type casting of int and float constants (Tom)
Cleanups for int8 inputs, range checking, and type conversion (Tom)
Fix for SELECT timespan('21:11:26'::time) (Tom)
-Fix for netmask('x.x.x.x/0') is 255.255.255.255 instead of 0.0.0.0
- (Oleg Sharoiko)
-Add btree index on NUMERIC(Jan)
+netmask('x.x.x.x/0') is 255.255.255.255 instead of 0.0.0.0 (Oleg Sharoiko)
+Add btree index on NUMERIC (Jan)
Perl fix for large objects containing NUL characters (Douglas Thomson)
ODBC fix for large objects (free)
Fix indexing of cidr data type
@@ -338,26 +401,25 @@ Make char_length()/octet_length including trailing blanks (Tom)
Made abstime/reltime use int4 instead of time_t (Peter E)
New lztext data type for compressed text fields
Revise code to handle coercion of int and float constants (Tom)
-New C-routines to implement a BIT and BIT VARYING type in /contrib
- (Adriaan Joubert)
+Start at new code to implement a BIT and BIT VARYING type (Adriaan Joubert)
NUMERIC now accepts scientific notation (Tom)
NUMERIC to int4 rounds (Tom)
Convert float4/8 to NUMERIC properly (Tom)
Allow type conversion with NUMERIC (Thomas)
Make ISO date style (2000-02-16 09:33) the default (Thomas)
-Add NATIONAL CHAR [ VARYING ]
+Add NATIONAL CHAR [ VARYING ] (Thomas)
Allow NUMERIC round and trunc to accept negative scales (Tom)
New TIME WITH TIME ZONE type (Thomas)
Add MAX()/MIN() on time type (Thomas)
Add abs(), mod(), fac() for int8 (Thomas)
-Add round(), sqrt(), cbrt(), pow()
-Rename NUMERIC power() to pow()
-Improved TRANSLATE() function
+Rename functions to round(), sqrt(), cbrt(), pow() for float8 (Thomas)
+Add transcendental math functions (e.g. sin(), acos()) for float8 (Thomas)
+Add exp() and ln() for NUMERIC type
+Rename NUMERIC power() to pow() (Thomas)
+Improved TRANSLATE() function (Edwin Ramirez, Tom)
Allow X=-Y operators (Tom)
-Add exp() and ln() as NUMERIC types
-Allow SELECT float8(COUNT(*)) / (SELECT COUNT(*) FROM int4_tbl) FROM int4_tbl
- GROUP BY f1; (Tom)
-Allow LOCALE to use indexes in regular expression searches(Tom)
+Allow SELECT float8(COUNT(*))/(SELECT COUNT(*) FROM t) FROM t GROUP BY f1; (Tom)
+Allow LOCALE to use indexes in regular expression searches (Tom)
Allow creation of functional indexes to use default types
Performance
@@ -378,13 +440,12 @@ Prefer index scans in cases where ORDER BY/GROUP BY is required (Tom)
Allocate large memory requests in fix-sized chunks for performance (Tom)
Fix vacuum's performance by reducing memory allocation requests (Tom)
Implement constant-expression simplification (Bernard Frankpitt, Tom)
-Allow more than first column to be used to determine start of index scan
- (Hiroshi)
+Use secondary columns to determine start of index scan (Hiroshi)
Prevent quadruple use of disk space when doing internal sorting (Tom)
Faster sorting by calling fewer functions (Tom)
Create system indexes to match all system caches (Bruce, Hiroshi)
-Make system caches use system indexes(Bruce)
-Make all system indexes unique(Bruce)
+Make system caches use system indexes (Bruce)
+Make all system indexes unique (Bruce)
Improve pg_statistics management for VACUUM speed improvement (Tom)
Flush backend cache less frequently (Tom, Hiroshi)
COPY now reuses previous memory allocation, improving performance (Tom)
@@ -398,17 +459,17 @@ New SET variable to control optimizer costs (Tom)
Optimizer queries based on LIMIT, OFFSET, and EXISTS qualifications (Tom)
Reduce optimizer internal housekeeping of join paths for speedup (Tom)
Major subquery speedup (Tom)
-Fewer fsync writes when fsync is not disabled(Tom)
-Improved LIKE optimizer estimates(Tom)
-Prevent fsync in SELECT-only queries(Vadim)
-Make index creation use psort code, because it is now faster(Tom)
+Fewer fsync writes when fsync is not disabled (Tom)
+Improved LIKE optimizer estimates (Tom)
+Prevent fsync in SELECT-only queries (Vadim)
+Make index creation use psort code, because it is now faster (Tom)
Allow creation of sort temp tables > 1 Gig
Source Tree Changes
-------------------
Fix for linux PPC compile
New generic expression-tree-walker subroutine (Tom)
-Change form() to varargform() to prevent portability problems.
+Change form() to varargform() to prevent portability problems
Improved range checking for large integers on Alphas
Clean up #include in /include directory (Bruce)
Add scripts for checking includes (Bruce)
@@ -418,9 +479,9 @@ Enable WIN32 compilation of libpq
Alpha spinlock fix from Uncle George
Overhaul of optimizer data structures (Tom)
Fix to cygipc library (Yutaka Tanida)
-Allow pgsql to work on newer Cygwin snapshots(Dan)
+Allow pgsql to work on newer Cygwin snapshots (Dan)
New catalog version number (Tom)
-Add Linux ARM.
+Add Linux ARM
Rename heap_replace to heap_update
Update for QNX (Dr. Andreas Kardos)
New platform-specific regression handling (Tom)
@@ -1636,7 +1697,7 @@ Support for client-side environment variables to specify time zone and date styl
Socket interface for client/server connection. This is the default now
so you may need to start postmaster with the
--i flag.
+ flag.
@@ -1646,11 +1707,12 @@ Better password authorization mechanisms. Default table permissions have changed
-
-
-Old-style time travel has been removed. Performance has been improved.
-
-
+
+
+ Old-style time travel
+ has been removed. Performance has been improved.
+
+
diff --git a/doc/src/sgml/rules.sgml b/doc/src/sgml/rules.sgml
index 2431ffd697a..f17cfd77b09 100644
--- a/doc/src/sgml/rules.sgml
+++ b/doc/src/sgml/rules.sgml
@@ -854,18 +854,18 @@
There was a long time where the Postgres
rule system was considered broken. The use of rules was not
- recommended and the only part working where view rules. And also
- these view rules made problems because the rule system wasn't able
- to apply them properly on other statements than a SELECT (for
+ recommended, and the only working part was view rules. These
+ view rules also caused problems because the rule system wasn't able
+ to apply them properly to statements other than a SELECT (for
example an UPDATE
that used data from a view didn't work).
- During that time, development moved on and many features where
+ During that time, development moved on and many features were
added to the parser and optimizer. The rule system got more and more
out of sync with their capabilities and it became harder and harder
- to start fixing it. Thus, noone did.
+ to start fixing it. Thus, no one did.
@@ -2088,7 +2088,7 @@ Merge Join
- Another situation are cases on UPDATE where it depends on the
+ Another situation is an UPDATE where whether an action should be
performed depends on the change of
an attribute. In Postgres version 6.4, the
attribute specification for rule events is disabled (it will have
@@ -2096,7 +2096,7 @@ Merge Join
- stay tuned). So for now the only way to
create a rule as in the shoelace_log example is to do it with
a rule qualification. That results in an extra query that is
- performed allways, even if the attribute of interest cannot
+ performed always, even if the attribute of interest cannot
change at all because it does not appear in the targetlist
of the initial query. When this is enabled again, it will be
one more advantage of rules over triggers. Optimization of
@@ -2108,7 +2108,7 @@ Merge Join
decision. The rule system will know it by looking up the
targetlist and will suppress the additional query completely
if the attribute isn't touched. So the rule, qualified or not,
- will only do it's scan's if there ever could be something to do.
+ will only do its scans if there ever could be something to do.
@@ -2121,3 +2121,20 @@ Merge Join
+
+
diff --git a/doc/src/sgml/runtime.sgml b/doc/src/sgml/runtime.sgml
index c4989e92cf8..dbe984c7b3f 100644
--- a/doc/src/sgml/runtime.sgml
+++ b/doc/src/sgml/runtime.sgml
@@ -1,5 +1,5 @@
@@ -16,7 +16,7 @@ $Header: /cvsroot/pgsql/doc/src/sgml/runtime.sgml,v 1.9 2000/04/23 00:25:06 tgl
All Postgres commands that are executed
directly from a Unix shell are
- found in the directory .../bin. Including this directory in
+ found in the directory .../bin. Including this directory in
your search path will make executing the commands easier.
diff --git a/doc/src/sgml/signals.sgml b/doc/src/sgml/signals.sgml
index 23625aed416..7f7e597e0b8 100644
--- a/doc/src/sgml/signals.sgml
+++ b/doc/src/sgml/signals.sgml
@@ -191,7 +191,7 @@ FloatExceptionHandler
-kill(*,signal) means sending a signal to all backends.
+"kill(*,signal)" means sending a signal to all backends.
@@ -247,3 +247,20 @@ cat old_pg_options > $DATA_DIR/pg_options
+
+
diff --git a/doc/src/sgml/spi.sgml b/doc/src/sgml/spi.sgml
index ada3d321edb..d453b1ed70a 100644
--- a/doc/src/sgml/spi.sgml
+++ b/doc/src/sgml/spi.sgml
@@ -157,7 +157,10 @@ Return status
Usage
-XXX thomas 1997-12-24
+
+
diff --git a/doc/src/sgml/sql.sgml b/doc/src/sgml/sql.sgml
index e030d4dbf84..f61b085c2ff 100644
--- a/doc/src/sgml/sql.sgml
+++ b/doc/src/sgml/sql.sgml
@@ -1,5 +1,5 @@
@@ -24,7 +24,7 @@ $Header: /cvsroot/pgsql/doc/src/sgml/sql.sgml,v 1.8 2000/04/07 13:30:58 thomas E
SQL has become the most popular relational query
language.
- The name SQL is an abbreviation for
+ The name "SQL" is an abbreviation for
Structured Query Language.
In 1974 Donald Chamberlin and others defined the
language SEQUEL (Structured English Query
@@ -759,8 +759,8 @@ tr(A,B)=t∧tr(C,D)=t
can be formulated using relational algebra can also be formulated
using the relational calculus and vice versa.
This was first proved by E. F. Codd in
- 1972. This proof is based on an algorithm (Codd's reduction
- algorithm) by which an arbitrary expression of the relational
+ 1972. This proof is based on an algorithm ("Codd's reduction
+ algorithm") by which an arbitrary expression of the relational
calculus can be reduced to a semantically equivalent expression of
relational algebra. For a more detailed discussion on that refer to
diff --git a/doc/src/sgml/start.sgml b/doc/src/sgml/start.sgml
index ccb43fbb0f8..d895fe52fb4 100644
--- a/doc/src/sgml/start.sgml
+++ b/doc/src/sgml/start.sgml
@@ -1,5 +1,5 @@
@@ -19,7 +19,7 @@ $Header: /cvsroot/pgsql/doc/src/sgml/start.sgml,v 1.10 2000/04/07 13:30:58 thoma
the database directories and started the
postmaster
process. This person does not have to be the Unix
- superuser (root)
+ superuser ("root")
or the computer system administrator; a person can install and use
Postgres without any special accounts or
privileges.
@@ -34,9 +34,9 @@ $Header: /cvsroot/pgsql/doc/src/sgml/start.sgml,v 1.10 2000/04/07 13:30:58 thoma
Throughout this manual, any examples that begin with
- the character % are commands that should be typed
+ the character "%" are commands that should be typed
at the Unix shell prompt. Examples that begin with the
- character * are commands in the Postgres query
+ character "*" are commands in the Postgres query
language, Postgres SQL.
@@ -346,7 +346,7 @@ mydb=>
workspace maintained by the terminal monitor.
The psql program responds to escape
codes that begin
- with the backslash character, \ For example, you
+ with the backslash character, "\". For example, you
can get help on the syntax of various
Postgres SQL
commands by typing:
@@ -364,7 +364,7 @@ mydb=> \g
This tells the server to process the query. If you
- terminate your query with a semicolon, the \g is not
+ terminate your query with a semicolon, the "\g" is not
necessary.
psql will automatically process
semicolon terminated queries.
@@ -386,9 +386,9 @@ mydb=> \q
White space (i.e., spaces, tabs and newlines) may be
used freely in SQL queries. Single-line
comments are denoted by
- --. Everything after the dashes up to the end of the
+ "--". Everything after the dashes up to the end of the
line is ignored. Multiple-line comments, and comments within a line,
- are denoted by /* ... */
+ are denoted by "/* ... */".
diff --git a/doc/src/sgml/syntax.sgml b/doc/src/sgml/syntax.sgml
index 457e46f0357..7a7f75a875a 100644
--- a/doc/src/sgml/syntax.sgml
+++ b/doc/src/sgml/syntax.sgml
@@ -1,5 +1,5 @@
@@ -65,7 +65,7 @@ $Header: /cvsroot/pgsql/doc/src/sgml/syntax.sgml,v 1.19 2000/04/08 23:12:00 momj
Any string can be specified as an identifier if surrounded by
double quotes (like this!). Some care is required since
such an identifier will be case sensitive
- and will retain embedded whitespace other special characters.
+ and will retain embedded whitespace and most other special characters.
@@ -84,6 +84,7 @@ EXPLAIN EXTEND
LISTEN LOAD LOCK
MOVE
NEW NONE NOTIFY
+OFFSET
RESET
SETOF SHOW
UNLISTEN UNTIL
@@ -98,19 +99,27 @@ VACUUM VERBOSE
are allowed to be present as column labels, but not as identifiers:
-CASE COALESCE CROSS CURRENT CURRENT_USER CURRENT_SESSION
-DEC DECIMAL
-ELSE END
-FALSE FOREIGN
+ALL ANY ASC BETWEEN BIT BOTH
+CASE CAST CHAR CHARACTER CHECK COALESCE COLLATE COLUMN
+ CONSTRAINT CROSS CURRENT CURRENT_DATE CURRENT_TIME
+ CURRENT_TIMESTAMP CURRENT_USER
+DEC DECIMAL DEFAULT DESC DISTINCT
+ELSE END EXCEPT EXISTS EXTRACT
+FALSE FLOAT FOR FOREIGN FROM FULL
GLOBAL GROUP
-LOCAL
-NULLIF NUMERIC
-ORDER
-POSITION PRECISION
-SESSION_USER
-TABLE THEN TRANSACTION TRUE
-USER
-WHEN
+HAVING
+IN INNER INTERSECT INTO IS
+JOIN
+LEADING LEFT LIKE LOCAL
+NATURAL NCHAR NOT NULL NULLIF NUMERIC
+ON OR ORDER OUTER OVERLAPS
+POSITION PRECISION PRIMARY PUBLIC
+REFERENCES RIGHT
+SELECT SESSION_USER SOME SUBSTRING
+TABLE THEN TO TRANSACTION TRIM TRUE
+UNION UNIQUE USER
+VARCHAR
+WHEN WHERE
The following are Postgres
@@ -118,12 +127,9 @@ WHEN
or SQL3 reserved words:
-ADD ALL ALTER AND ANY AS ASC
-BEGIN BETWEEN BOTH BY
-CASCADE CAST CHAR CHARACTER CHECK CLOSE
- COLLATE COLUMN COMMIT CONSTRAINT CREATE
- CURRENT_DATE CURRENT_TIME CURRENT_TIMESTAMP
- CURSOR
+ADD ALTER AND AS
+BEGIN BY
+CASCADE CLOSE COMMIT CREATE CURSOR
DECLARE DEFAULT DELETE DESC DISTINCT DROP
EXECUTE EXISTS EXTRACT
FETCH FLOAT FOR FROM FULL
@@ -148,10 +154,10 @@ WHERE WITH WORK
The following are SQL92 reserved key words which
are not Postgres reserved key words, but which
if used as function names are always translated into the function
- length:
+ CHAR_LENGTH:
-CHAR_LENGTH CHARACTER_LENGTH
+CHARACTER_LENGTH
@@ -166,12 +172,28 @@ BOOLEAN DOUBLE FLOAT INT INTEGER INTERVAL REAL SMALLINT
+
+ The following are not keywords of any kind, but when used in the
+ context of a type name are translated into a native
+ Postgres type, and when used in the
+ context of a function name are translated into a native function:
+
+
+DATETIME TIMESPAN
+
+
+ (translated to TIMESTAMP and INTERVAL,
+ respectively). This feature is intended to help with
+ transitioning to v7.0, and will be removed in the next full
+ release (likely v7.1).
+
+
The following are either SQL92
or SQL3 reserved key words
which are not key words in Postgres.
These have no proscribed usage in Postgres
- at the time of writing (v6.5) but may become reserved key words in the
+ at the time of writing (v7.0) but may become reserved key words in the
future:
@@ -185,9 +207,10 @@ BOOLEAN DOUBLE FLOAT INT INTEGER INTERVAL REAL SMALLINT
ALLOCATE ARE ASSERTION AT AUTHORIZATION AVG
-BIT BIT_LENGTH
-CASCADED CATALOG COLLATION CONNECT CONNECTION
- CONTINUE CONVERT CORRESPONDING COUNT
+BIT_LENGTH
+CASCADED CATALOG CHAR_LENGTH CHARACTER_LENGTH COLLATION
+ CONNECT CONNECTION CONTINUE CONVERT CORRESPONDING COUNT
+ CURRENT_SESSION
DATE DEALLOCATE DEC DESCRIBE DESCRIPTOR
DIAGNOSTICS DISCONNECT DOMAIN
ESCAPE EXCEPT EXCEPTION EXEC EXTERNAL
@@ -231,20 +254,21 @@ WHENEVER WRITE
ACCESS AFTER AGGREGATE
BACKWARD BEFORE
-CACHE CREATEDB CREATEUSER CYCLE
+CACHE COMMENT CREATEDB CREATEUSER CYCLE
DATABASE DELIMITERS
EACH ENCODING EXCLUSIVE
-FORWARD FUNCTION
+FORCE FORWARD FUNCTION
HANDLER
INCREMENT INDEX INHERITS INSENSITIVE INSTEAD ISNULL
LANCOMPILER LOCATION
MAXVALUE MINVALUE MODE
-NOCREATEDB NOCREATEUSER NOTHING NOTNULL
+NOCREATEDB NOCREATEUSER NOTHING NOTIFY NOTNULL
OIDS OPERATOR
PASSWORD PROCEDURAL
-RECIPE RENAME RETURNS ROW RULE
+RECIPE REINDEX RENAME RETURNS ROW RULE
SEQUENCE SERIAL SHARE START STATEMENT STDIN STDOUT
-TRUSTED
+TEMP TRUSTED
+UNLISTEN UNTIL
VALID VERSION
diff --git a/doc/src/sgml/trigger.sgml b/doc/src/sgml/trigger.sgml
index e9e6d9bc448..1e76524baee 100644
--- a/doc/src/sgml/trigger.sgml
+++ b/doc/src/sgml/trigger.sgml
@@ -1,187 +1,332 @@
-
-Triggers
+
+ Triggers
-
-Postgres has various client interfaces
-such as Perl, Tcl, Python and C, as well as two
-Procedural Languages
-(PL). It is also possible
-to call C functions as trigger actions. Note that STATEMENT-level trigger
-events are not supported in the current version. You can currently specify
-BEFORE or AFTER on INSERT, DELETE or UPDATE of a tuple as a trigger event.
-
+
+ Postgres has various client interfaces
+ such as Perl, Tcl, Python and C, as well as three
+ Procedural Languages
+ (PL). It is also possible
+ to call C functions as trigger actions. Note that STATEMENT-level trigger
+ events are not supported in the current version. You can currently specify
+ BEFORE or AFTER on INSERT, DELETE or UPDATE of a tuple as a trigger event.
+
-
-Trigger Creation
+
+ Trigger Creation
-
- If a trigger event occurs, the trigger manager (called by the Executor)
-initializes the global structure TriggerData *CurrentTriggerData (described
-below) and calls the trigger function to handle the event.
-
+
+ If a trigger event occurs, the trigger manager (called by the Executor)
+ initializes the global structure TriggerData *CurrentTriggerData (described
+ below) and calls the trigger function to handle the event.
+
-
- The trigger function must be created before the trigger is created as a
-function taking no arguments and returns opaque.
-
+
+ The trigger function must be created before the trigger itself, as a
+ function taking no arguments and returning opaque.
+
-
- The syntax for creating triggers is as follows:
+
+ The syntax for creating triggers is as follows:
-
- CREATE TRIGGER <trigger name> <BEFORE|AFTER> <INSERT|DELETE|UPDATE>
- ON <relation name> FOR EACH <ROW|STATEMENT>
- EXECUTE PROCEDURE <procedure name> (<function args>);
-
-
+
+CREATE TRIGGER trigger [ BEFORE | AFTER ] [ INSERT | DELETE | UPDATE [ OR ... ] ]
+ ON relation FOR EACH [ ROW | STATEMENT ]
+ EXECUTE PROCEDURE procedure
+ (args);
+
-
- The name of the trigger is used if you ever have to delete the trigger.
-It is used as an argument to the DROP TRIGGER command.
-
+ where the arguments are:
-
- The next word determines whether the function is called before or after
-the event.
-
+
+
+
+ trigger
+
+
+
+ The name of the trigger is
+ used if you ever have to delete the trigger.
+ It is used as an argument to the DROP TRIGGER command.
+
+
+
-
- The next element of the command determines on what event(s) will trigger
-the function. Multiple events can be specified separated by OR.
-
+
+ BEFORE
+ AFTER
+
+
+ Determines whether the function is called before or after
+ the event.
+
+
+
-
- The relation name determines which table the event applies to.
-
+
+ INSERT
+ DELETE
+ UPDATE
+
+
+ The next element of the command determines what event(s) will trigger
+ the function. Multiple events can be specified separated by OR.
+
+
+
-
- The FOR EACH statement determines whether the trigger is fired for each
-affected row or before (or after) the entire statement has completed.
-
+
+ relation
+
+
+ The relation name determines which table the event applies to.
+
+
+
-
- The procedure name is the C function called.
-
+
+ ROW
+ STATEMENT
+
+
+ The FOR EACH clause determines whether the trigger is fired once for
+ each affected row or only once for the entire statement.
+
+
+
-
- The args are passed to the function in the CurrentTriggerData structure.
-The purpose of passing arguments to the function is to allow different
-triggers with similar requirements to call the same function.
-
+
+ procedure
+
+
+ The procedure name is the C function called.
+
+
+
-
- Also, function may be used for triggering different relations (these
-functions are named as "general trigger functions").
-
+
+ args
+
+
+ The arguments passed to the function in the CurrentTriggerData structure.
+ The purpose of passing arguments to the function is to allow different
+ triggers with similar requirements to call the same function.
+
-
- As example of using both features above, there could be a general
-function that takes as its arguments two field names and puts the current
-user in one and the current timestamp in the other. This allows triggers to
-be written on INSERT events to automatically track creation of records in a
-transaction table for example. It could also be used as a "last updated"
-function if used in an UPDATE event.
-
+
+ Also, the same procedure
+ may be used for several different relations (such
+ functions are known as "general trigger functions").
+
-
- Trigger functions return HeapTuple to the calling Executor. This
-is ignored for triggers fired after an INSERT, DELETE or UPDATE operation
-but it allows BEFORE triggers to:
+
+ As an example of using both features above, there could be a general
+ function that takes as its arguments two field names and puts the current
+ user in one and the current timestamp in the other. This allows triggers to
+ be written on INSERT events to automatically track creation of records in a
+ transaction table for example. It could also be used as a "last updated"
+ function if used in an UPDATE event.
+
+
+
+
+
- - return NULL to skip the operation for the current tuple (and so the
- tuple will not be inserted/updated/deleted);
- - return a pointer to another tuple (INSERT and UPDATE only) which will
- be inserted (as the new version of the updated tuple if UPDATE) instead
- of original tuple.
-
+
+ Trigger functions return HeapTuple to the calling Executor. This
+ is ignored for triggers fired after an INSERT, DELETE or UPDATE operation
+ but it allows BEFORE triggers to:
-
- Note, that there is no initialization performed by the CREATE TRIGGER
-handler. This will be changed in the future. Also, if more than one trigger
-is defined for the same event on the same relation, the order of trigger
-firing is unpredictable. This may be changed in the future.
-
+
+
+
+ Return NULL to skip the operation for the current tuple (and so the
+ tuple will not be inserted/updated/deleted).
+
+
-
- If a trigger function executes SQL-queries (using SPI) then these queries
-may fire triggers again. This is known as cascading triggers. There is no
-explicit limitation on the number of cascade levels.
-
+
+
+ Return a pointer to another tuple (INSERT and UPDATE only) which will
+ be inserted (as the new version of the updated tuple if UPDATE) instead
+ of the original tuple.
+
+
+
+
-
- If a trigger is fired by INSERT and inserts a new tuple in the same
-relation then this trigger will be fired again. Currently, there is nothing
-provided for synchronization (etc) of these cases but this may change. At
-the moment, there is function funny_dup17() in the regress tests which uses
-some techniques to stop recursion (cascading) on itself...
-
+
+ Note that there is no initialization performed by the CREATE TRIGGER
+ handler. This will be changed in the future. Also, if more than one trigger
+ is defined for the same event on the same relation, the order of trigger
+ firing is unpredictable. This may be changed in the future.
+
-
+
+ If a trigger function executes SQL queries (using SPI), then these queries
+ may fire triggers again. This is known as cascading triggers. There is no
+ explicit limitation on the number of cascade levels.
+
-
-Interaction with the Trigger Manager
+
+ If a trigger is fired by INSERT and inserts a new tuple in the same
+ relation then this trigger will be fired again. Currently, there is nothing
+ provided for synchronization (etc) of these cases but this may change. At
+ the moment, there is a function funny_dup17() in the regress tests which uses
+ some techniques to stop recursion (cascading) on itself...
+
+
-
- As mentioned above, when function is called by the trigger manager,
-structure TriggerData *CurrentTriggerData is NOT NULL and initialized. So
-it is better to check CurrentTriggerData against being NULL at the start
-and set it to NULL just after fetching the information to prevent calls to
-a trigger function not from the trigger manager.
-
+
+ Interaction with the Trigger Manager
-
- struct TriggerData is defined in src/include/commands/trigger.h:
+
+ As mentioned above, when a function is called by the trigger manager,
+ the structure TriggerData *CurrentTriggerData is NOT NULL and initialized.
+ So it is better to check that CurrentTriggerData is not NULL at the start,
+ and to set it to NULL just after fetching the information, to guard
+ against calls that did not come from the trigger manager.
+
-
+
+ struct TriggerData is defined in src/include/commands/trigger.h:
+
+
typedef struct TriggerData
{
- TriggerEvent tg_event;
- Relation tg_relation;
- HeapTuple tg_trigtuple;
- HeapTuple tg_newtuple;
- Trigger *tg_trigger;
+ TriggerEvent tg_event;
+ Relation tg_relation;
+ HeapTuple tg_trigtuple;
+ HeapTuple tg_newtuple;
+ Trigger *tg_trigger;
} TriggerData;
-
+
-
-tg_event
- describes event for which the function is called. You may use the
- following macros to examine tg_event:
+ where the members are defined as follows:
- TRIGGER_FIRED_BEFORE(event) returns TRUE if trigger fired BEFORE;
- TRIGGER_FIRED_AFTER(event) returns TRUE if trigger fired AFTER;
- TRIGGER_FIRED_FOR_ROW(event) returns TRUE if trigger fired for
- ROW-level event;
- TRIGGER_FIRED_FOR_STATEMENT(event) returns TRUE if trigger fired for
- STATEMENT-level event;
- TRIGGER_FIRED_BY_INSERT(event) returns TRUE if trigger fired by INSERT;
- TRIGGER_FIRED_BY_DELETE(event) returns TRUE if trigger fired by DELETE;
- TRIGGER_FIRED_BY_UPDATE(event) returns TRUE if trigger fired by UPDATE.
+
+
+ tg_event
+
+
+ describes the event for which the function is called. You may use the
+ following macros to examine tg_event:
-tg_relation
- is pointer to structure describing the triggered relation. Look at
- src/include/utils/rel.h for details about this structure. The most
- interest things are tg_relation->rd_att (descriptor of the relation
- tuples) and tg_relation->rd_rel->relname (relation's name. This is not
- char*, but NameData. Use SPI_getrelname(tg_relation) to get char* if
- you need a copy of name).
+
+
+ TRIGGER_FIRED_BEFORE(tg_event)
+
+
+ Returns TRUE if the trigger fired BEFORE.
+
+
+
-tg_trigtuple
- is a pointer to the tuple for which the trigger is fired. This is the tuple
- being inserted (if INSERT), deleted (if DELETE) or updated (if UPDATE).
- If INSERT/DELETE then this is what you are to return to Executor if
- you don't want to replace tuple with another one (INSERT) or skip the
- operation.
+
+ TRIGGER_FIRED_AFTER(tg_event)
+
+
+ Returns TRUE if the trigger fired AFTER.
+
+
+
-tg_newtuple
- is a pointer to the new version of tuple if UPDATE and NULL if this is
- for an INSERT or a DELETE. This is what you are to return to Executor if
- UPDATE and you don't want to replace this tuple with another one or skip
- the operation.
+
+ TRIGGER_FIRED_FOR_ROW(tg_event)
+
+
+ Returns TRUE if the trigger fired for
+ a ROW-level event.
+
+
+
-tg_trigger
- is pointer to structure Trigger defined in src/include/utils/rel.h:
+
+ TRIGGER_FIRED_FOR_STATEMENT(tg_event)
+
+
+ Returns TRUE if the trigger fired for
+ a STATEMENT-level event.
+
+
+
+
+ TRIGGER_FIRED_BY_INSERT(tg_event)
+
+
+ Returns TRUE if the trigger was fired by INSERT.
+
+
+
+
+
+ TRIGGER_FIRED_BY_DELETE(tg_event)
+
+
+ Returns TRUE if the trigger was fired by DELETE.
+
+
+
+
+
+ TRIGGER_FIRED_BY_UPDATE(tg_event)
+
+
+ Returns TRUE if the trigger was fired by UPDATE.
+
+
+
+
+
+
+
+
+
+ tg_relation
+
+
+ is a pointer to a structure describing the triggered relation. Look at
+ src/include/utils/rel.h for details about this structure. The most
+ interesting fields are tg_relation->rd_att (descriptor of the relation
+ tuples) and tg_relation->rd_rel->relname (the relation's name; this is not
+ char*, but NameData. Use SPI_getrelname(tg_relation) to get a char* if
+ you need a copy of the name).
+
+
+
+
+
+ tg_trigtuple
+
+
+ is a pointer to the tuple for which the trigger is fired. This is the tuple
+ being inserted (if INSERT), deleted (if DELETE) or updated (if UPDATE).
+ For INSERT/DELETE, this is what you should return to the Executor if
+ you don't want to replace the tuple with another one (INSERT) or skip the
+ operation.
+
+
+
+
+
+ tg_newtuple
+
+
+ is a pointer to the new version of the tuple if UPDATE, and NULL if this is
+ for an INSERT or a DELETE. This is what you should return to the Executor if
+ UPDATE and you don't want to replace this tuple with another one or skip
+ the operation.
+
+
+
+
+
+ tg_trigger
+
+
+ is a pointer to the structure Trigger, defined in src/include/utils/rel.h:
+
+
typedef struct Trigger
{
Oid tgoid;
@@ -197,64 +342,72 @@ typedef struct Trigger
int16 tgattr[FUNC_MAX_ARGS];
char **tgargs;
} Trigger;
+
- tgname is the trigger's name, tgnargs is number of arguments in tgargs,
- tgargs is an array of pointers to the arguments specified in the CREATE
- TRIGGER statement. Other members are for internal use only.
-
-
-
+ where
+ tgname is the trigger's name, tgnargs is the number of arguments in tgargs,
+ and tgargs is an array of pointers to the arguments specified in the CREATE
+ TRIGGER statement. The other members are for internal use only.
+
+
+
+
+
+
-
-Visibility of Data Changes
+
+ Visibility of Data Changes
-
- Postgres data changes visibility rule: during a query execution, data
-changes made by the query itself (via SQL-function, SPI-function, triggers)
-are invisible to the query scan. For example, in query
+
+ Postgres data changes visibility rule: during a query execution, data
+ changes made by the query itself (via SQL functions, SPI functions, or triggers)
+ are invisible to the query scan. For example, in the query
-
- INSERT INTO a SELECT * FROM a
-
+
+INSERT INTO a SELECT * FROM a;
+
- tuples inserted are invisible for SELECT' scan. In effect, this
-duplicates the database table within itself (subject to unique index
-rules, of course) without recursing.
-
+ the tuples inserted are invisible to the SELECT's scan. In effect, this
+ duplicates the database table within itself (subject to unique index
+ rules, of course) without recursing.
+
-
- But keep in mind this notice about visibility in the SPI documentation:
+
+ But keep in mind this notice about visibility in the SPI documentation:
-
- Changes made by query Q are visible by queries which are started after
- query Q, no matter whether they are started inside Q (during the
- execution of Q) or after Q is done.
-
-
+
+
+Changes made by query Q are visible by queries which are started after
+query Q, no matter whether they are started inside Q (during the
+execution of Q) or after Q is done.
+
+