From ed0b3f2d99b8b8d20dd777bf58a974a9a095a2c7 Mon Sep 17 00:00:00 2001
From: Bruce Momjian
-Last updated: Fri Jun 2 11:32:13 EDT 2000
-
-Current maintainer: Bruce Momjian (pgman@candle.pha.pa.us)
-
-The most recent version of this document can be viewed at the postgreSQL
-Web site, http://www.PostgreSQL.org.
-
-Linux-specific questions are answered in http://www.PostgreSQL.org/docs/faq-linux.html.
-
-HPUX-specific questions are answered in http://www.PostgreSQL.org/docs/faq-hpux.html.
-
-Solaris-specific questions are answered in http://www.postgresql.org/docs/faq-solaris.html.
-
-Irix-specific questions are answered in http://www.PostgreSQL.org/docs/faq-irix.html.
-
-
-
-
-
-PostgreSQL is an enhancement of the POSTGRES database management system,
-a next-generation DBMS research prototype. While PostgreSQL retains the
-powerful data model and rich data types of POSTGRES, it replaces the
-PostQuel query language with an extended subset of SQL. PostgreSQL is
-free and the complete source is available.
-
-PostgreSQL development is being performed by a team of Internet
-developers who all subscribe to the PostgreSQL development mailing list.
-The current coordinator is Marc G. Fournier (scrappy@postgreSQL.org). (See
-below on how to join). This team is now responsible for all current and
-future development of PostgreSQL.
-
-The authors of PostgreSQL 1.01 were Andrew Yu and Jolly Chen. Many
-others have contributed to the porting, testing, debugging and
-enhancement of the code. The original Postgres code, from which
-PostgreSQL is derived, was the effort of many graduate students,
-undergraduate students, and staff programmers working under the
-direction of Professor Michael Stonebraker at the University of
-California, Berkeley.
-
-The original name of the software at Berkeley was Postgres. When SQL
-functionality was added in 1995, its name was changed to Postgres95. The
-name was changed at the end of 1996 to PostgreSQL.
-
-It is pronounced Post-Gres-Q-L.
-
-
-
-PostgreSQL is subject to the following COPYRIGHT.
-
-PostgreSQL Data Base Management System
-
-Portions copyright (c) 1996-2000, PostgreSQL, Inc
-
-Portions Copyright (c) 1994-6 Regents of the University of California
-
-Permission to use, copy, modify, and distribute this software and its
-documentation for any purpose, without fee, and without a written
-agreement is hereby granted, provided that the above copyright notice
-and this paragraph and the following two paragraphs appear in all
-copies.
-
-IN NO EVENT SHALL THE UNIVERSITY OF CALIFORNIA BE LIABLE TO ANY PARTY
-FOR DIRECT, INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES,
-INCLUDING LOST PROFITS, ARISING OUT OF THE USE OF THIS SOFTWARE AND ITS
-DOCUMENTATION, EVEN IF THE UNIVERSITY OF CALIFORNIA HAS BEEN ADVISED OF
-THE POSSIBILITY OF SUCH DAMAGE.
-
-THE UNIVERSITY OF CALIFORNIA SPECIFICALLY DISCLAIMS ANY WARRANTIES,
-INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY
-AND FITNESS FOR A PARTICULAR PURPOSE. THE SOFTWARE PROVIDED HEREUNDER
-IS ON AN "AS IS" BASIS, AND THE UNIVERSITY OF CALIFORNIA HAS NO
-OBLIGATIONS TO PROVIDE MAINTENANCE, SUPPORT, UPDATES, ENHANCEMENTS, OR
-MODIFICATIONS.
-
-
-
-
-
-The authors have compiled and tested PostgreSQL on the following
-platforms (some of these compiles require gcc):
-
-
-
-
-It is possible to compile the libpq C library, psql, and other
-interfaces and binaries to run on MS Windows platforms. In this case,
-the client is running on MS Windows, and communicates via TCP/IP to a
-server running on one of our supported Unix platforms.
-
-A file win31.mak is included in the distribution for making a
-Win32 libpq library and psql.
-
-The database server is now working on Windows NT using the Cygnus
-Unix/NT porting library. See pgsql/doc/README.NT in the distribution.
-There is also a web page at
-http://www.freebsd.org/~kevlo/postgres/portNT.html.
-
-There is another port using U/Win at http://surya.wipro.com/uwin/ported.html.
-
-
-
-The primary anonymous ftp site for PostgreSQL is
-ftp://ftp.postgreSQL.org/pub
-
-For mirror sites, see our main web site.
-
-
-
-There is no official support for PostgreSQL from the University of
-California, Berkeley. It is maintained through volunteer effort.
-
-The main mailing list is: pgsql-general@postgreSQL.org.
-It is available for discussion of matters pertaining to PostgreSQL.
-To subscribe, send a mail with the lines in the body (not
-the subject line)
-
-
-
-to pgsql-general-request@postgreSQL.org.
-
-There is also a digest list available. To subscribe to this list, send
-email to:
-pgsql-general-digest-request@postgreSQL.org with a BODY of:
-
-
-
-The bugs mailing list is available. To subscribe to this list, send email
-to bugs-request@postgreSQL.org
-with a BODY of:
-
-
-
-
-
-Additional mailing lists and information about PostgreSQL can be found
-via the PostgreSQL WWW home page at:
-
-
-
-There is also an IRC channel on EFNet, channel #PostgreSQL.
-I use the unix command
-
-Commercial support for PostgreSQL is available at http://www.pgsql.com/
-
-
-
-
-The latest release of PostgreSQL is version 7.0.2.
-
-We plan to have major releases every four months.
-
-
-
-
-Several manuals, manual pages, and some small test examples are
-included in the distribution. See the /doc directory. You can also
-browse the manual on-line at
-http://www.postgresql.org/docs/postgres.
-in the distribution.
-
-
-There is a PostgreSQL book available at
-http://www.postgresql.org/docs/awbook.html
-
-psql has some nice \d commands to show information about types,
-operators, functions, aggregates, etc.
-
-The web site contains even more documentation.
-
-
-
-PostgreSQL supports an extended subset of SQL-92. See our
-
-TODO for a list of known bugs, missing features, and future plans.
-
-
-
-The PostgreSQL book at
-http://www.postgresql.org/docs/awbook.html teaches SQL.
-
-There is a nice tutorial at
-http://w3.one.net/~jhoffman/sqltut.htm and at
-http://ourworld.compuserve.com/homepages/graeme_birchall/HTM_COOK.HTM.
-
-Another one is "Teach Yourself SQL in 21 Days, Second Edition" at
-http://members.tripod.com/er4ebus/sql/index.htm
-
-Many of our users like The Practical SQL Handbook, Bowman et al.,
-Addison Wesley. Others like The Complete Reference SQL, Groff et al.,
-McGraw-Hill.
-
-
-
-
-Yes, we easily handle dates past the year 2000 AD, and before 2000 BC.
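-For example, date arithmetic across the 2000 boundary behaves as expected
-(the table and values below are only an illustration):
-
-    CREATE TABLE events (title text, happened date);
-    INSERT INTO events VALUES ('rollover', '1999-12-31');
-    SELECT happened + 3 FROM events;                  -- returns 2000-01-03
-    SELECT '2001-02-03'::date - '1999-12-31'::date;   -- number of days between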
-
-
-
-
-First, download the latest sources and read the PostgreSQL Developers
-documentation on our web site, or in the distribution.
-Second, subscribe to the pgsql-hackers and pgsql-patches mailing lists.
-Third, submit high-quality patches to pgsql-patches.
-
-There are about a dozen people who have COMMIT privileges to
-the PostgreSQL CVS archive. All of them have submitted so many
-high-quality patches that it was a pain for the existing
-committers to keep up, and we had confidence that patches they
-committed were likely to be of high quality.
-
-
-
-Fill out the "bug-template" file and send it to: bugs@postgreSQL.org
-
-Also check out our ftp site ftp://ftp.postgreSQL.org/pub to
-see if there is a more recent PostgreSQL version or patches.
-
-
-
-
-There are several ways of measuring software: features, performance,
-reliability, support, and price.
-
-
-
-
-
-
-
-There are two ODBC drivers available, PsqlODBC and OpenLink ODBC.
-
-PsqlODBC is included in the distribution. More information about it can
-be gotten from:
-ftp://ftp.postgresql.org/pub/odbc/index.html
-
-OpenLink ODBC can be gotten from
-http://www.openlinksw.com. It works with their standard ODBC client
-software so you'll have PostgreSQL ODBC available on every client
-platform they support (Win, Mac, Unix, VMS).
-
-They will probably be selling this product to people who need
-commercial-quality support, but a freeware version will always be
-available. Questions to postgres95@openlink.co.uk.
-
-See also the
-ODBC chapter of the Programmer's Guide.
-
-
-
-
-A nice introduction to Database-backed Web pages can be seen at: http://www.webtools.com
-
-There is also one at
-http://www.phone.net/home/mwm/hotlist/.
-
-For web integration, PHP is an excellent interface. It is at:
-http://www.php.net
-
-PHP is great for simple stuff, but for more complex cases, many
-use the perl interface and CGI.pm.
-
-A WWW gateway based on WDB using perl can be downloaded from http://www.eol.ists.ca/~dunlop/wdb-p95
-
-
-
-We have a nice graphical user interface called pgaccess, which is
-shipped as part of the distribution. Pgaccess also has a report
-generator. The web page is http://www.flex.ro/pgaccess
-
-We also include ecpg, which is an embedded SQL query language interface for
-C.
-
-
-
-We have:
-
-
-
-
-
-
-
-
-
-
-
-
-The simplest way is to specify the --prefix option when running configure.
-If you forgot to do that, you can edit Makefile.global and change POSTGRESDIR
-accordingly, or create a Makefile.custom and define POSTGRESDIR there.
-
-
-
-
-It could be a variety of problems, but first check to see that you
-have System V extensions installed in your kernel. PostgreSQL requires
-kernel support for shared memory and semaphores.
-
-
-
-
-You either do not have shared memory configured properly in your kernel or
-you need to enlarge the shared memory available in the kernel. The
-exact amount you need depends on your architecture and how many buffers
-and backend processes you configure postmaster to run with.
-For most systems, with default numbers of buffers and processes, you
-need a minimum of ~1MB.
-
-
-
-If the error message is IpcSemaphoreCreate: semget failed (No space
-left on device) then your kernel is not configured with enough
-semaphores. Postgres needs one semaphore per potential backend process.
-A temporary solution is to start the postmaster with a smaller limit on
-the number of backend processes. Use -N with a parameter less
-than the default of 32. A more permanent solution is to increase your
-kernel's SEMMNS and SEMMNI parameters.
-
-If the error message is something else, you might not have semaphore
-support configured in your kernel at all.
-
-
-
-
-By default, PostgreSQL only allows connections from the local machine
-using Unix domain sockets. Other machines will not be able to connect
-unless you add the -i flag to the postmaster,
-and enable host-based authentication by modifying the file
-$PGDATA/pg_hba.conf accordingly. This will allow TCP/IP connections.
-
-
-
-
-The default configuration allows only unix domain socket connections
-from the local machine. To enable TCP/IP connections, make sure the
-postmaster has been started with the -i option, and add an
-appropriate host entry to the file
-pgsql/data/pg_hba.conf. See the pg_hba.conf manual page.
-
-
-
-
-You should not create database users with user id 0 (root). They will be
-unable to access the database. This is a security precaution because
-of the ability of any user to dynamically link object modules into the
-database engine.
-
-
-
-
-This problem can be caused by a kernel that is not configured to support
-semaphores.
-
-
-
-
-Certainly, indices can speed up queries. The EXPLAIN command
-allows you to see how PostgreSQL is interpreting your query, and which
-indices are being used.
-
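-For example, to check whether a query uses an index (the table, column,
-and index names here are only illustrative):
-
-    CREATE INDEX tab_col1_idx ON tab (col1);
-    EXPLAIN SELECT * FROM tab WHERE col1 = 42;
-    -- the reported plan shows whether an index scan or a sequential scan is chosen
-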
-If you are doing a lot of INSERTs, consider doing them in a large
-batch using the COPY command. This is much faster than
-individual INSERTs. Also, statements not in a BEGIN
-WORK/COMMIT transaction block are considered to be in their
-own transaction. Consider performing several statements in a single
-transaction block. This reduces the transaction overhead. Also
-consider dropping and recreating indices when making large data
-changes.
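-For example, wrapping many INSERTs in one transaction, or using COPY,
-avoids the per-statement transaction overhead (table and file names are
-only examples):
-
-    BEGIN WORK;
-    INSERT INTO tab (col1) VALUES (1);
-    INSERT INTO tab (col1) VALUES (2);
-    COMMIT WORK;
-
-    COPY tab FROM '/tmp/tab.data';    -- even faster for large loads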
-
-There are several things that can be tuned. You can disable
-fsync() by starting the postmaster with the -o -F option. This will
-prevent fsync() from flushing data to disk after every transaction.
-
-You can also use the postmaster -B option to increase the number of
-shared memory buffers used by the backend processes. If you make this
-parameter too high, the postmaster may not start up because you've exceeded
-your kernel's limit on shared memory space.
-Each buffer is 8K and the default is 64 buffers.
-
-You can also use the backend -S option to increase the maximum amount
-of memory used by the backend process for temporary sorts. The -S value
-is measured in kilobytes, and the default is 512 (ie, 512K).
-
-You can also use the CLUSTER command to group data in base tables to
-match an index. See the cluster(l) manual page for more details.
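-For example, assuming a table tab with an index on col1, the 7.0-era
-syntax is:
-
-    CREATE INDEX tab_col1_idx ON tab (col1);
-    CLUSTER tab_col1_idx ON tab;    -- rewrites tab in tab_col1_idx order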
-
-
-
-
-PostgreSQL has several features that report status information that can
-be valuable for debugging purposes.
-
-First, by running configure with the --enable-cassert option, many
-assert()'s monitor the progress of the backend and halt the program when
-something unexpected occurs.
-
-Both postmaster and postgres have several debug options available.
-First, whenever you start the postmaster, make sure you send the
-standard output and error to a log file, like:
-
-
-This will put a server.log file in the top-level PostgreSQL directory.
-This file contains useful information about problems or errors
-encountered by the server. Postmaster has a -d option that allows even
-more detailed information to be reported. The -d option takes a number
-that specifies the debug level. Be warned that high debug level values
-generate large log files.
-
-If the postmaster is not running, you can actually run the
-postgres backend from the command line, and type your SQL statement
-directly. This is recommended only for debugging purposes. Note
-that a newline terminates the query, not a semicolon. If you have
-compiled with debugging symbols, you can use a debugger to see what is
-happening. Because the backend was not started from the postmaster, it
-is not running in an identical environment and locking/backend
-interaction problems may not be duplicated.
-
-If the postmaster is running, start psql in one window,
-then find the PID of the postgres process used by
-psql. Use a debugger to attach to the postgres
-PID. You can set breakpoints in the debugger and issue
-queries from psql. If you are debugging postgres startup,
-you can set PGOPTIONS="-W n", then start psql. This will cause
-startup to delay for n seconds so you can attach with the
-debugger and trace through the startup sequence.
-
-The postgres program has -s, -A, and -t options that can be very useful
-for debugging and performance measurements.
-
-You can also compile with profiling to see what functions are taking
-execution time. The backend profile files will be deposited in the
-pgsql/data/base/dbname directory. The client profile file will be put
-in the client's current directory.
-
-
-
-
-You need to increase the postmaster's limit on how many concurrent backend
-processes it can start.
-
-In Postgres 6.5 and up, the default limit is 32 processes. You can
-increase it by restarting the postmaster with a suitable -N
-value. With the default configuration you can set -N as large as
-1024; if you need more, increase MAXBACKENDS in
-include/config.h and rebuild. You can set the default value of
--N at configuration time, if you like, using configure's
---with-maxbackends switch.
-
-Note that if you make -N larger than 32, you must also increase
--B beyond its default of 64; -B must be at least twice -N, and
-probably should be more than that for best performance. For large
-numbers of backend processes, you are also likely to find that you need
-to increase various Unix kernel configuration parameters. Things to
-check include the maximum size of shared memory blocks,
-SHMMAX, the maximum number of semaphores,
-SEMMNS and SEMMNI, the maximum number of
-processes, NPROC, the maximum number of processes per
-user, MAXUPRC, and the maximum number of open files,
-NFILE and NINODE. The reason that Postgres
-has a limit on the number of allowed backend processes is so that you
-can ensure that your system won't run out of resources.
-
-In Postgres versions prior to 6.5, the maximum number of backends was
-64, and changing it required a rebuild after altering the MaxBackendId
-constant in include/storage/sinvaladt.h.
-
-
-
-They are temporary files generated by the query executor. For
-example, if a sort needs to be done to satisfy an ORDER BY, and
-the sort requires more space than the backend's -S parameter allows,
-then temp files are created to hold the extra data.
-
-The temp files should go away automatically, but might not if a backend
-crashes during a sort. If you have no transactions running at the time,
-it is safe to delete the pg_tempNNN.NN files.
-
-
-
-
-
-
-Check your locale configuration. PostgreSQL uses the locale settings of
-the user that ran the postmaster process. There are postgres and psql
-SET commands to control the date format. Set those accordingly for
-your operating environment.
-
-
-
-
-See the DECLARE manual page for a description.
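-For example, a binary cursor is declared with the BINARY keyword and must
-be used inside a transaction block (the names here are only illustrative):
-
-    BEGIN WORK;
-    DECLARE mycursor BINARY CURSOR FOR SELECT col1 FROM tab;
-    FETCH 10 FROM mycursor;
-    CLOSE mycursor;
-    COMMIT WORK;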
-
-
-
-See the FETCH manual page, or use SELECT ... LIMIT....
-
-The entire query may have to be evaluated, even if you only want the
-first few rows. Consider a query that has an ORDER BY.
-If there is an index that matches the ORDER BY,
-PostgreSQL may be able to evaluate only the first few records requested,
-or the entire query may have to be evaluated until the desired rows have
-been generated.
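-For example, either of these returns only the first ten rows of a
-hypothetical table tab:
-
-    SELECT * FROM tab ORDER BY col1 LIMIT 10;
-
-    BEGIN WORK;
-    DECLARE c CURSOR FOR SELECT * FROM tab ORDER BY col1;
-    FETCH 10 FROM c;
-    COMMIT WORK;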
-
-
-
-You can read the source code for psql, file
-pgsql/src/bin/psql/psql.c. It contains SQL commands that generate the
-output for psql's backslash commands. You can also start psql
-with the -E option so that it will print out the queries it uses
-to execute the commands you give.
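-For example, a query along these lines lists the user tables directly from
-the system catalogs (filtering out the pg_* system tables is just one way
-to do it):
-
-    SELECT relname
-    FROM pg_class
-    WHERE relkind = 'r' AND relname !~ '^pg_'
-    ORDER BY relname;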
-
-
-
-
-We do not support ALTER TABLE DROP COLUMN, but do
-this:
-
-
-
-
-
-
-
-These are the limits:
-
-
-
-To change the maximum row size, edit include/config.h and change
-BLCKSZ. To use attributes larger than 8K, you can also
-use the large object interface.
-
-Row length limit will be removed in 7.1.
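-For example, the server-side large object functions can hold data larger
-than a row, storing only the object's oid in the table (the table and file
-names are only examples):
-
-    CREATE TABLE images (name text, data oid);
-    INSERT INTO images VALUES ('logo', lo_import('/tmp/logo.jpg'));
-    SELECT lo_export(data, '/tmp/logo_copy.jpg') FROM images WHERE name = 'logo';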
-
-
-
-
-A Postgres database can require about six times the disk space
-required to store the data in a flat file.
-
-Consider a file of 300,000 lines with two integers on each line. The
-flat file is 2.4MB. The size of the PostgreSQL database file containing
-this data can be estimated at 14MB:
-
-
-Frequently Asked Questions (FAQ) for PostgreSQL
-
-
-
-1.1) What is PostgreSQL?
-1.2) What's the copyright on PostgreSQL?
-1.3) What Unix platforms does PostgreSQL run on?
-1.4) What non-unix ports are available?
-1.5) Where can I get PostgreSQL?
-1.6) Where can I get support for PostgreSQL?
-1.7) What is the latest release of PostgreSQL?
-1.8) What documentation is available for PostgreSQL?
-1.9) How do I find out about known bugs or missing features?
-1.10) How can I learn SQL?
-1.11) Is PostgreSQL Y2K compliant?
-1.12) How do I join the development team?
-1.13) How do I submit a bug report?
-1.14) How does PostgreSQL compare to other DBMS's?
-
-
-
-
-2.1) Are there ODBC drivers for
-PostgreSQL?
-2.2) What tools are available for hooking
-PostgreSQL to Web pages?
-2.3) Does PostgreSQL have a graphical user interface?
-A report generator? An embedded query language interface?
-2.4) What languages are available to communicate
-with PostgreSQL?
-
-
-
-
-3.1) Why does initdb fail?
-3.2) How do I install PostgreSQL somewhere other than
-/usr/local/pgsql?
-3.3) When I start the postmaster, I get a
-Bad System Call or core dumped message. Why?
-3.4) When I try to start the postmaster, I get
-IpcMemoryCreate errors. Why?
-3.5) When I try to start the postmaster, I get
-IpcSemaphoreCreate errors. Why?
-3.6) How do I prevent other hosts from accessing my
-PostgreSQL database?
-3.7) Why can't I connect to my database from
-another machine?
-3.8) Why can't I access the database as the
-root user?
-3.9) All my servers crash under concurrent
-table access. Why?
-3.10) How do I tune the database engine for
-better performance?
-3.11) What debugging features are available in
-PostgreSQL?
-3.12) I get 'Sorry, too many clients' when trying to
-connect. Why?
-3.13) What are the pg_tempNNN.NN files in my
-database directory?
-
-
-
-4.1) The system seems to be confused about commas,
-decimal points, and date formats.
-4.2) What is the exact difference between
-binary cursors and normal cursors?
-4.3) How do I select only the first few rows of
-a query?
-
-4.4) How do I get a list of tables, or other
-things I can see in psql?
-4.5) How do you remove a column from a table?
-
-4.6) What is the maximum size for a
-row, table, database?
-4.7) How much database disk space is required
-to store data from a typical flat file?
-
-4.8) How do I find out what indices or
-operations are defined in the database?
-4.9) My queries are slow or don't make use of the
-indexes. Why?
-4.10) How do I see how the query optimizer is
-evaluating my query?
-4.11) What is an R-tree index?
-4.12) What is Genetic Query Optimization?
-
-4.13) How do I do regular expression searches
-and case-insensitive regexp searching?
-4.14) In a query, how do I detect if a field
-is NULL?
-4.15) What is the difference between the
-various character types?
-4.16.1) How do I create a serial/auto-incrementing field?
-4.16.2) How do I get the value of a serial insert?
-4.16.3) Don't currval() and nextval() lead to a
-race condition with other concurrent backend processes?
-
-4.17) What is an oid? What is a tid?
-4.18) What is the meaning of some of the terms
-used in PostgreSQL?
-
-4.19) Why do I get the error "FATAL: palloc
-failure: memory exhausted?"
-4.20) How do I tell what PostgreSQL version I
-am running?
-4.21) My large-object operations get invalid
-large obj descriptor. Why?
-4.22) How do I create a column that will default to the
-current time?
-4.23) Why are my subqueries using IN so
-slow?
-4.24) How do I do an outer join?
-
-
-
-5.1) I wrote a user-defined function. When I run
-it in psql, why does it dump core?
-5.2) What does the message:
-NOTICE:PortalHeapMemoryFree: 0x402251d0 not in alloc set! mean?
-5.3) How can I contribute some nifty new types and functions
-for PostgreSQL?
-5.4) How do I write a C function to return a
-tuple?
-5.5) I have changed a source file. Why doesn't the
-recompile see the change?
-
-
-
-
-
-1.1) What is PostgreSQL?
-1.2) What's the copyright on
-PostgreSQL?
-1.3) What Unix platforms does PostgreSQL run
-on?
-
-1.4) What non-unix ports are available?
-1.5) Where can I get PostgreSQL?
-1.6) Where can I get support for PostgreSQL?
- subscribe
- end
-
- subscribe
- end
-
-
-Digests are sent out to members of this list whenever the main list has
-received around 30k of messages.
- subscribe
- end
-
-
-There is also a developers discussion mailing list available. To
-subscribe to this list, send email to hackers-request@postgreSQL.org
-with a BODY of:
- subscribe
- end
-
-http://postgreSQL.org
-
-irc -c '#PostgreSQL' "$USER"
-irc.phoenix.net
-1.7) What is the latest release of PostgreSQL?
-1.8) What documentation is available for PostgreSQL?
-1.9) How do I find out about known bugs or missing features?
-
-1.10) How can I learn SQL?
-1.11) Is PostgreSQL Y2K compliant?
-1.12) How do I join the development team?
-1.13) How do I submit a bug report?
-1.14) How does PostgreSQL compare to other
-DBMS's?
-
-
-
-
-
-
-In comparison to MySQL or leaner database systems, we are slower on
-inserts/updates because we have transaction overhead. Of course, MySQL
-doesn't have any of the features mentioned in the Features
-section above. We are built for flexibility and features, though we
-continue to improve performance through profiling and source code
-analysis. There is an interesting web page comparing PostgreSQL to MySQL
-at
-http://openacs.org/why-not-mysql.html
-
-We handle each user connection by creating a Unix process. Backend
-processes share data buffers and locking information. With multiple
-CPU's, multiple backends can easily run on different CPU's.
-
-
-
-
-
-
-
-
-
-2.1) Are there ODBC drivers for PostgreSQL?
-2.2) What tools are available for hooking
-PostgreSQL to Web pages?
-2.3) Does PostgreSQL have a graphical user interface?
-A report generator? An embedded query language interface?
-2.4) What languages are available to
-communicate with PostgreSQL?
-
-
-3.1) Why does initdb fail?
-
-3.2) How do I install PostgreSQL somewhere
-other than /usr/local/pgsql?
-3.3) When I start the postmaster, I get a Bad
-System Call or core dumped message. Why?
-3.4) When I try to start the postmaster, I
-get IpcMemoryCreate errors. Why?
-3.5) When I try to start the postmaster, I
-get IpcSemaphoreCreate errors. Why?
-3.6) How do I prevent other hosts from
-accessing my PostgreSQL database?
-3.7) Why can't I connect to my database from
-another machine?
-3.8) Why can't I access the database as the root
-user?
-3.9) All my servers crash under concurrent
-table access. Why?
-3.10) How do I tune the database engine for
-better performance?
-3.11) What debugging features are available in
-PostgreSQL?
-    cd /usr/local/pgsql
-    ./bin/postmaster >server.log 2>&1 &
-
-3.12) I get 'Sorry, too many clients' when trying
-to connect. Why?
-3.13) What are the pg_tempNNN.NN files in my
-database directory?
-
-4.1) The system seems to be confused about
-commas, decimal points, and date formats.
-4.2) What is the exact difference between
-binary cursors and normal cursors?
-4.3) How do I SELECT only the first few
-rows of a query?
-4.4) How do I get a list of tables, or other
-information I see in psql?
-4.5) How do you remove a column from a
-table?
-    SELECT ...   -- select all columns but the one you want to remove
-    INTO TABLE new_table
-    FROM old_table;
-    DROP TABLE old_table;
-    ALTER TABLE new_table RENAME TO old_table;
-
-4.6) What is the maximum size for a
-row, table, database?
-Maximum size for a database?            unlimited (60GB databases exist)
-Maximum size for a table?               unlimited on all operating systems
-Maximum size for a row?                 8k, configurable to 32k
-Maximum number of rows in a table?      unlimited
-Maximum number of columns in a table?   unlimited
-Maximum number of indexes on a table?   unlimited
-
-
-Of course, these are not actually unlimited, but limited to available
-disk space.
-
-4.7) How much database disk space is required to
-store data from a typical flat file?
- 36 bytes: each row header (approximate)
- + 8 bytes: two int fields @ 4 bytes each
- + 4 bytes: pointer on page to tuple
- ----------------------------------------
- 48 bytes per row
-
- The data page size in PostgreSQL is 8192 bytes (8 KB), so:
-
-    8192 bytes per page
-    -------------------   =  170 rows per database page (rounded down)
-      48 bytes per row
-
-    300000 data rows
-    --------------------  =  1765 database pages (rounded up)
-    170 rows per page
-
-1765 database pages * 8192 bytes per page  =  14,458,880 bytes (14MB)
-
-
-psql has a variety of backslash commands to show such information. Use
-\? to see them.
-
-Also try the file pgsql/src/tutorial/syscat.source. It
-illustrates many of the SELECTs needed to get information from
-the database system tables.
-
-PostgreSQL does not automatically maintain statistics. One has to make
-an explicit VACUUM call to update the statistics. After
-statistics are updated, the optimizer knows how many rows are in the table,
-and can better decide if it should use indices. Note that the optimizer
-does not use indices in cases when the table is small because a
-sequential scan would be faster.
-
-For column-specific optimization statistics, use VACUUM
-ANALYZE. VACUUM ANALYZE is important for complex
-multi-join queries, so the optimizer can estimate the number of rows
-returned from each table, and choose the proper join order. The backend
-does not keep track of column statistics on its own, so VACUUM
-ANALYZE must be run to collect them periodically.
-
-Indexes are usually not used for ORDER BY operations: a
-sequential scan followed by an explicit sort is faster than an indexscan
-of all tuples of a large table, because it takes fewer disk accesses.
-
-When using wild-card operators such as LIKE or ~, indices can
-only be used if the beginning of the search is anchored to the start of
-the string. So, to use indices, LIKE searches should not
-begin with %, and ~ (regular expression searches) should
-start with ^.
-
-See the EXPLAIN manual page.
-
-An R-tree index is used for indexing spatial data. A hash index can't
-handle range searches. A B-tree index only handles range searches in a
-single dimension. R-trees can handle multi-dimensional data. For
-example, if an R-tree index can be built on an attribute of type point,
-the system can more efficiently answer queries like "select all points
-within a bounding rectangle."
-
-The canonical paper that describes the original R-tree design is:
-
-Guttman, A. "R-Trees: A Dynamic Index Structure for Spatial Searching."
-Proc of the 1984 ACM SIGMOD Int'l Conf on Mgmt of Data, 45-57.
-
-You can also find this paper in Stonebraker's "Readings in Database
-Systems".
-
-Built-in R-trees can handle polygons and boxes. In theory, R-trees can
-be extended to handle a higher number of dimensions. In practice,
-extending R-trees requires a bit of work, and we don't currently have any
-documentation on how to do it.
-
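-For example, assuming a table rooms with a column floorplan of type box,
-an R-tree index supports bounding-rectangle searches like this:
-
-    CREATE TABLE rooms (name text, floorplan box);
-    CREATE INDEX rooms_floorplan_idx ON rooms USING rtree (floorplan);
-    SELECT name FROM rooms
-    WHERE floorplan && '((0,0),(10,10))'::box;    -- overlaps the rectangle
-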
-The GEQO module in PostgreSQL is intended to solve the query
-optimization problem of joining many tables by means of a Genetic
-Algorithm (GA). It allows the handling of large join queries through
-non-exhaustive search.
-
-For further information, see the documentation.
-
-The ~ operator does regular-expression matching, and ~*
-does case-insensitive regular-expression matching. There is no
-case-insensitive variant of the LIKE operator, but you can get the
-effect of case-insensitive LIKE with this:
-
-    WHERE lower(textfield) LIKE lower(pattern)
-
-You test the column with IS NULL and IS NOT NULL.
-
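-For example, with a hypothetical table friend that has a nullable column
-age:
-
-    SELECT name FROM friend WHERE age IS NULL;
-    SELECT name FROM friend WHERE age IS NOT NULL;
-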
-Type          Internal Name   Notes
---------------------------------------------------
-"char"        char            1 character
-CHAR(#)       bpchar          blank padded to the specified fixed length
-VARCHAR(#)    varchar         size specifies maximum length, no padding
-TEXT          text            length limited only by maximum row length
-BYTEA         bytea           variable-length array of bytes
-
-You will see the internal name when examining system catalogs
-and in some error messages.
-
-The last four types above are "varlena" types (i.e., the first four bytes
-are the length, followed by the data). char(#) allocates the
-maximum number of bytes no matter how much data is stored in the field.
-text, varchar(#), and bytea all have variable length on the disk,
-and because of this, there is a small performance penalty for using
-them. Specifically, the penalty is for access to all columns after the
-first column of this type.
-
-PostgreSQL supports the SERIAL data type. It auto-creates a
-sequence and index on the column. For example, this:
-
-    CREATE TABLE person (
-        id   SERIAL,
-        name TEXT
-    );
-
-is automatically translated into this:
-
-    CREATE SEQUENCE person_id_seq;
-    CREATE TABLE person (
-        id   INT4 NOT NULL DEFAULT nextval('person_id_seq'),
-        name TEXT
-    );
-    CREATE UNIQUE INDEX person_id_key ON person ( id );
-
-See the create_sequence manual page for more information about sequences.
-
-You can also use each row's oid field as a unique value. However, if
-you need to dump and reload the database, you need to use pg_dump's -o
-option or the COPY WITH OIDS option to preserve the oids.
-
-For more details, see Bruce Momjian's chapter on
-Numbering Rows.
-
-Probably the simplest approach is to retrieve the next SERIAL value from
-the sequence object with the nextval() function before inserting, and
-then insert it explicitly. Using the example table in 4.16.1, that might
-look like this:
-
-    $newSerialID = nextval('person_id_seq');
-    INSERT INTO person (id, name) VALUES ($newSerialID, 'Blaise Pascal');
-
-You would then also have the new value stored in $newSerialID for use in
-other queries (e.g., as a foreign key to the person table). Note that the
-automatically created SEQUENCE object will be named
-<table>_<serialcolumn>_seq, where table and serialcolumn are the names of
-your table and your SERIAL column, respectively.
-
-Similarly, you could retrieve the just-assigned SERIAL value with the
-currval() function after it was inserted by default, e.g.,
-
-    INSERT INTO person (name) VALUES ('Blaise Pascal');
-    $newID = currval('person_id_seq');
-
-Finally, you could use the oid returned from the
-INSERT statement to look up the default value, though this is probably
-the least portable approach. In Perl, using DBI with Edmund Mergl's
-DBD::Pg module, the oid value is made available via
-$sth->{pg_oid_status} after $sth->execute().
-
-No. That is handled by the backends.
-
-Oids are PostgreSQL's answer to unique row ids. Every row that is
-created in PostgreSQL gets a unique oid. All oids generated during
-initdb are less than 16384 (from backend/access/transam.h). All
-user-created oids are equal to or greater than this. By default, all these
-oids are unique not only within a table or database, but within
-the entire PostgreSQL installation.
-
-PostgreSQL uses oids in its internal system tables to link rows between
-tables. These oids can be used to identify specific user rows and used
-in joins. It is recommended you use column type oid to store oid
-values. See the sql(l) manual page to see the other internal columns.
-You can create an index on the oid field for faster access.
-
-Oids are assigned to all new rows from a central area that is used by
-all databases. If you want to capture the oid in an ordinary column, or if
-you want to make a copy of the table with the original oids, there is
-no reason you can't do it:
-
-    -- assuming a table old_table (mycol int):
-
-    -- capture each row's existing oid in an ordinary column:
-    SELECT oid AS old_oid, mycol INTO new_table FROM old_table;
-
-    -- or make a copy of old_table that keeps the original oids:
-    COPY old_table WITH OIDS TO '/tmp/pgtable';
-    CREATE TABLE copy_table (mycol int);
-    COPY copy_table WITH OIDS FROM '/tmp/pgtable';
-
-Tids are used to identify specific physical rows with block and offset
-values. Tids change after rows are modified or reloaded. They are used
-by index entries to point to physical rows.
-
-Some of the source code and older documentation use terms that have more
-common usage. Here are some:
-
-
-It is possible you have run out of virtual memory on your system, or
-your kernel has a low limit for certain resources. Try this before
-starting the postmaster:
-
-    ulimit -d 65536
-    limit datasize 64m
-
-Depending on your shell, only one of these may succeed, but it will set
-your process data segment limit much higher and perhaps allow the query
-to complete. This command applies to the current process, and all
-subprocesses created after the command is run. If you are having a problem
-with the SQL client because the backend is returning too much data, try
-it before starting the client.
-
-From psql, type select version();
-
-You need to put BEGIN WORK and COMMIT
-around any use of a large object handle, that is,
-surrounding lo_open ... lo_close.
-
-Current PostgreSQL enforces the rule by closing large object handles at
-transaction commit, which happens immediately upon completion of the
-lo_open command if you are not inside a transaction. So the
-first attempt to do anything with the handle will draw invalid large
-obj descriptor. So code that used to work (at least most of the
-time) will now generate that error message if you fail to use a
-transaction.
-
-If you are using a client interface like ODBC you may need to set
-auto-commit off.
-
-Use now():
-
-
- CREATE TABLE test (x int, modtime timestamp default now() );
-
-
-4.23) Why are my subqueries using IN so slow?
-
-Currently, we join subqueries to outer queries by sequentially scanning
-the result of the subquery for each row of the outer query. A workaround
-is to replace IN with EXISTS. For example, change:
-
-    SELECT *
-    FROM tab
-    WHERE col1 IN (SELECT col2 FROM TAB2)
-
-to:
-
-    SELECT *
-    FROM tab
-    WHERE EXISTS (SELECT col2 FROM TAB2 WHERE col1 = col2)
-
-We hope to fix this limitation in a future release.
-
-PostgreSQL does not support outer joins in the current release. They can
-be simulated using UNION and NOT IN. For
-example, when joining tab1 and tab2, the following query
-does an outer join of the two tables:
-
-    SELECT tab1.col1, tab2.col2
-    FROM tab1, tab2
-    WHERE tab1.col1 = tab2.col1
-    UNION ALL
-    SELECT tab1.col1, NULL
-    FROM tab1
-    WHERE tab1.col1 NOT IN (SELECT tab2.col1 FROM tab2)
-    ORDER BY tab1.col1
-
-The problem could be a number of things. Try testing your user-defined
-function in a stand-alone test program first.
-
-You are pfree'ing something that was not palloc'ed.
-Beware of mixing malloc/free and palloc/pfree.
-
-Send your extensions to the pgsql-hackers mailing list, and they will
-eventually end up in the contrib/ subdirectory.
-
-This requires wizardry so extreme that the authors have never
-tried it, though in principle it can be done.
-
-The Makefiles do not have the proper dependencies for include files. You
-have to do a make clean and then another make.
-
-
-
-
-
-
diff --git a/doc/src/FAQ_DEV.html b/doc/src/FAQ_DEV.html
deleted file mode 100644
index ba60c157c21..00000000000
--- a/doc/src/FAQ_DEV.html
+++ /dev/null
@@ -1,486 +0,0 @@
-
-
-Last updated: Fri Jun 9 21:54:54 EDT 2000
-
-Current maintainer: Bruce Momjian (pgman@candle.pha.pa.us)
-
-The most recent version of this document can be viewed at
-the PostgreSQL Web site, http://PostgreSQL.org.
-
-
-Aside from the User documentation mentioned in the regular FAQ, there
-are several development tools available. First, all the files in the
-/tools directory are designed for developers.
-
-    RELEASE_CHANGES       changes we have to make for each release
-    SQL_keywords          standard SQL'92 keywords
-    backend               description/flowchart of the backend directories
-    ccsym                 find standard defines made by your compiler
-    entab                 converts tabs to spaces, used by pgindent
-    find_static           finds functions that could be made static
-    find_typedef          get a list of typedefs in the source code
-    make_ctags            make vi 'tags' file in each directory
-    make_diff             make *.orig and diffs of source
-    make_etags            make emacs 'etags' files
-    make_keywords.README  make comparison of our keywords and SQL'92
-    make_mkid             make mkid ID files
-    mkldexport            create AIX exports file
-    pgindent              indents C source files
-    pginclude             scripts for adding/removing include files
-    unused_oids           in pgsql/src/include/catalog
-
-Let me note some of these. If you point your browser at the
-file:/usr/local/src/pgsql/src/tools/backend/index.html directory,
-you will see a few paragraphs describing the data flow, the backend
-components in a flow chart, and a description of the shared memory area.
-You can click on any flowchart box to see a description. If you then
-click on the directory name, you will be taken to the source directory,
-to browse the actual source code behind it. We also have several README
-files in some source directories to describe the function of the module.
-The browser will display these when you enter the directory also. The
-tools/backend directory is also contained on our web page under
-the title How PostgreSQL Processes a Query.
-
-Second, you really should have an editor that can handle tags, so you
-can tag a function call to see the function definition, and then tag
-inside that function to see an even lower-level function, and then back
-out twice to return to the original function. Most editors support this
-via tags or etags files.
-
-Third, you need to get id-utils from:
-
-    ftp://alpha.gnu.org/gnu/id-utils-3.2d.tar.gz
-    ftp://tug.org/gnu/id-utils-3.2d.tar.gz
-    ftp://ftp.enst.fr/pub/gnu/gnits/id-utils-3.2d.tar.gz
-
-By running tools/make_mkid, an archive of source symbols can be
-created that can be rapidly queried like grep or edited. Others
-prefer glimpse.
-
-make_diff has tools to create patch diff files that can be
-applied to the distribution.
-
-Our standard format is to indent each code level with one tab, where
-each tab is four spaces. You will need to set your editor to display
-tabs as four spaces:
-
-
-    vi in ~/.exrc:
-        set tabstop=4
-        set sw=4
-    more:
-        more -x4
-    less:
-        less -x4
-    emacs:
-        M-x set-variable tab-width
-    or
-        ; Cmd to set tab stops &etc for working with PostgreSQL code
-        (c-add-style "pgsql"
-            '("bsd"
-                (indent-tabs-mode . t)
-                (c-basic-offset   . 4)
-                (tab-width . 4)
-                (c-offsets-alist .
-                    ((case-label . +))))
-            t) ; t = set this mode on
-
-        and add this to your autoload list (modify file path in macro):
-
-        (setq auto-mode-alist
-            (cons '("\\`/usr/local/src/pgsql/.*\\.[chyl]\\'" . pgsql-c-mode)
-            auto-mode-alist))
-    or
-        /*
-         * Local variables:
-         * tab-width: 4
-         * c-indent-level: 4
-         * c-basic-offset: 4
-         * End:
-         */
-
-pgindent is run on all source files just before each beta test
-period. It auto-formats all source files to make them consistent.
-Comment blocks that need specific line breaks should be formatted as
-block comments, where the comment starts as
-/*------
. These comments will not be reformatted in any
-way.
-
-pginclude contains scripts used to add needed #include's to
-include files, and removed unneeded #include's.
-
-When adding system types, you will need to assign oids to them.
-There is also a script called unused_oids in
-pgsql/src/include/catalog that shows the unused oids.
-
-
- -I have four good books, An Introduction to Database Systems, by -C.J. Date, Addison, Wesley, A Guide to the SQL Standard, by C.J. -Date, et. al, Addison, Wesley, Fundamentals of Database Systems, -by Elmasri and Navathe, and Transaction Processing, by Jim Gray, -Morgan, Kaufmann
- -There is also a database performance site, with a handbook on-line -written by Jim Gray at http://www.benchmarkresources.com. - - - -
- -palloc() and pfree() are used in place of malloc() and -free() because we automatically free all memory allocated when a -transaction completes. This makes it easier to make sure we free memory -that gets allocated in one place, but only freed much later. There are -several contexts that memory can be allocated in, and this controls when -the allocated memory is automatically freed by the backend.
- - -
-We do this because this allows a consistent way to pass data inside the
-backend in a flexible way. Every node has a NodeTag which
-specifies what type of data is inside the Node. Lists are groups
-of Nodes chained together as a forward-linked list.
-
-Here are some of the List manipulation commands:
-
-    lfirst(i)
-        return the data at list element i.
-    lnext(i)
-        return the next list element after i.
-    foreach(i, list)
-        loop through list, assigning each list element to i.
-        It is important to note that i is a List *, not the data in the
-        List element. You need to use lfirst(i) to get at the data.
-        Here is a typical code snippet that loops through a List containing
-        Var *'s and processes each one:
-
-            List *i, *list;
-
-            foreach(i, list)
-            {
-                Var *var = lfirst(i);
-
-                /* process var here */
-            }
-
-    lcons(node, list)
-        add node to the front of list, or create a new list with
-        node if list is NIL.
-    lappend(list, node)
-        add node to the end of list. This is more expensive
-        than lcons.
-    nconc(list1, list2)
-        Concat list2 on to the end of list1.
-    length(list)
-        return the length of the list.
-    nth(i, list)
-        return the i'th element in list.
-    lconsi, ...
-        There are integer versions of these: lconsi, lappendi, nthi.
-        Lists containing integers instead of Node pointers are used to
-        hold lists of relation object ids and other integer quantities.
-
-You can print nodes easily inside gdb. First, to disable
-output truncation when you use the gdb print command:
-
- (gdb) set print elements 0
-
-
-Instead of printing values in gdb format, you can use the next two
-commands to print out List, Node, and structure contents in a verbose
-format that is easier to understand. List's are unrolled into nodes,
-and nodes are printed in detail. The first prints in a short format,
-and the second in a long format:
-
-
- (gdb) call print(any_pointer)
- (gdb) call pprint(any_pointer)
-
-
-The output appears in the postmaster log file, or on your screen if you
-are running a backend directly without a postmaster.
-- -
- -The source code is over 250,000 lines. Many problems/features are -isolated to one specific area of the code. Others require knowledge of -much of the source. If you are confused about where to start, ask the -hackers list, and they will be glad to assess the complexity and give -pointers on where to start.
- -Another thing to keep in mind is that many fixes and features can be -added with surprisingly little code. I often start by adding code, then -looking at other areas in the code where similar things are done, and by -the time I am finished, the patch is quite small and compact.
- -When adding code, keep in mind that it should use the existing -facilities in the source, for performance reasons and for simplicity. -Often a review of existing code doing similar things is helpful.
- - -
-There are several ways to obtain the source tree. Occasional developers
-can just get the most recent source tree snapshot from
-ftp.postgresql.org. For regular developers, you can use CVS. CVS
-allows you to download the source tree, then occasionally update your
-copy of the source tree with any new changes. Using CVS, you don't have
-to download the entire source each time, only the changed files.
-Anonymous CVS does not allow developers to update the remote source
-tree, though privileged developers can do this. There is a CVS FAQ on
-our web site that describes how to use remote CVS. You can also use
-CVSup, which has similar functionality, and is available from
-ftp.postgresql.org.
-
- -To update the source tree, there are two ways. You can generate a patch -against your current source tree, perhaps using the make_diff tools -mentioned above, and send them to the patches list. They will be -reviewed, and applied in a timely manner. If the patch is major, and we -are in beta testing, the developers may wait for the final release -before applying your patches.
- -For hard-core developers, Marc(scrappy@postgresql.org) will give you a -Unix shell account on postgresql.org, so you can use CVS to update the -main source tree, or you can ftp your files into your account, patch, -and cvs install the changes directly into the source tree.
- -
- -First, use psql to make sure it is working as you expect. Then -run src/test/regress and get the output of -src/test/regress/checkresults with and without your changes, to -see that your patch does not change the regression test in unexpected -ways. This practice has saved me many times. The regression tests test -the code in ways I would never do, and has caught many bugs in my -patches. By finding the problems now, you save yourself a lot of -debugging later when things are broken, and you can't figure out when it -happened.
- - -
- -The structures passing around from the parser, rewrite, optimizer, and -executor require quite a bit of support. Most structures have support -routines in src/backend/nodes used to create, copy, read, and output -those structures. Make sure you add support for your new field to these -files. Find any other places the structure may need code for your new -field. mkid is helpful with this (see above).
- - -
- -Table, column, type, function, and view names are stored in system -tables in columns of type Name. Name is a fixed-length, -null-terminated type of NAMEDATALEN bytes. (The default value -for NAMEDATALEN is 32 bytes.) - -
- typedef struct nameData
- {
- char data[NAMEDATALEN];
- } NameData;
- typedef NameData *Name;
-
-
-Table, column, type, function, and view names that come into the
-backend via user queries are stored as variable-length, null-terminated
-character strings.
-
-Many functions are called with both types of names, i.e., heap_open().
-Because the Name type is null-terminated, it is safe to pass it to a
-function expecting a char *. Because there are many cases where on-disk
-names (Name) are compared to user-supplied names (char *), there are many
-cases where Name and char * are used interchangeably.
-
- -
- -You first need to find the tuples(rows) you are interested in. There -are two ways. First, SearchSysCacheTuple() and related functions -allow you to query the system catalogs. This is the preferred way to -access system tables, because the first call to the cache loads the -needed rows, and future requests can return the results without -accessing the base table. The caches use system table indexes -to look up tuples. A list of available caches is located in -src/backend/utils/cache/syscache.c. -src/backend/utils/cache/lsyscache.c contains many column-specific -cache lookup functions.
- -The rows returned are cached-owned versions of the heap rows. They are -invalidated when the base table changes. Because the cache is local to -each backend, you may use the pointer returned from the cache for short -periods without making a copy of the tuple. If you send the pointer -into a large function that will be doing its own cache lookups, it is -possible the cache entry may be flushed, so you should use -SearchSysCacheTupleCopy() in these cases, and pfree() the -tuple when you are done.
- -If you can't use the system cache, you will need to retrieve the data -directly from the heap table, using the buffer cache that is shared by -all backends. The backend automatically takes care of loading the rows -into the buffer cache.
- -Open the table with heap_open(). You can then start a table scan -with heap_beginscan(), then use heap_getnext() and -continue as long as HeapTupleIsValid() returns true. Then do a -heap_endscan(). Keys can be assigned to the scan. -No indexes are used, so all rows are going to be compared to the keys, -and only the valid rows returned.
- -You can also use heap_fetch() to fetch rows by block -number/offset. While scans automatically lock/unlock rows from the -buffer cache, with heap_fetch(), you must pass a Buffer -pointer, and ReleaseBuffer() it when completed. - -Once you have the row, you can get data that is common to all tuples, -like t_self and t_oid, by merely accessing the -HeapTuple structure entries. - -If you need a table-specific column, you should take the HeapTuple -pointer, and use the GETSTRUCT() macro to access the -table-specific start of the tuple. You then cast the pointer as a -Form_pg_proc pointer if you are accessing the pg_proc table, or -Form_pg_type if you are accessing pg_type. You can then access -the columns by using a structure pointer: - -
-
- ((Form_pg_class) GETSTRUCT(tuple))->relnatts
-
-
-
-You should not directly change live tuples in this way. The best
-way is to use heap_tuplemodify() and pass it your palloc'ed
-tuple, and the values you want changed. It returns another palloc'ed
-tuple, which you pass to heap_replace().
-
-You can delete tuples by passing the tuple's t_self to
-heap_destroy(). You can use it for heap_update() too.
-
-Remember, tuples can be either system cache versions, which may go away
-soon after you get them, buffer cache versions, which go away when
-you heap_getnext(), heap_endscan, or
-ReleaseBuffer(), in the heap_fetch() case. Or it may be a
-palloc'ed tuple, that you must pfree() when finished.
-
-- -elog() is used to send messages to the front-end, and optionally -terminate the current query being processed. The first parameter is an -elog level of NOTICE, DEBUG, ERROR, or -FATAL. - -NOTICE prints on the user's terminal and the postmaster logs. -DEBUG prints only in the postmaster logs. ERROR prints in -both places, and terminates the current query, never returning from the call. -FATAL terminates the backend process. - -The remaining parameters of elog are a printf-style set of -parameters to print. - -
- -The files configure and configure.in are part of the -GNU autoconf package. Configure allows us to test for various -capabilities of the OS, and to set variables that can then be tested in -C programs and Makefiles. Autoconf is installed on the PostgreSQL main -server. To add options to configure, edit configure.in, and then -run autoconf to generate configure.
- -When configure is run by the user, it tests various OS -capabilities, stores those in config.status and -config.cache, and modifies a list of *.in files. For -example, if there exists a Makefile.in, configure generates a -Makefile that contains substitutions for all @var@ parameters -found by configure.
- -When you need to edit files, make sure you don't waste time modifying -files generated by configure. Edit the *.in file, and -re-run configure to recreate the needed file. If you run make -distclean from the top-level source directory, all files derived by -configure are removed, so you see only the file contained in the source -distribution.
- -
- -There are a variety of places that need to be modified to add a new -port. First, start in the src/template directory. Add an -appropriate entry for your OS. Also, use src/config.guess to add -your OS to src/template/.similar. You shouldn't match the OS -version exactly. The configure test will look for an exact OS -version number, and if not found, find a match without version number. -Edit src/configure.in to add your new OS. (See configure item -above.) You will need to run autoconf, or patch src/configure -too.
- -Then, check src/include/port and add your new OS file, with -appropriate values. Hopefully, there is already locking code in -src/include/storage/s_lock.h for your CPU. There is also a -src/makefiles directory for port-specific Makefile handling. -There is a backend/port directory if you need special files for -your OS.
- - - -