          Developer's Frequently Asked Questions (FAQ) for PostgreSQL

   Last updated: Fri Feb 14 08:59:10 EST 2003

   Current maintainer: Bruce Momjian (pgman@candle.pha.pa.us)

   The most recent version of this document can be viewed at the
   PostgreSQL Web site, http://www.PostgreSQL.org.
     _________________________________________________________________

                             General Questions

   1.1) How do I get involved in PostgreSQL development?
   1.2) How do I add a feature or fix a bug?
   1.3) How do I download/update the current source tree?
   1.4) How do I test my changes?
   1.5) What tools are available for developers?
   1.6) What books are good for developers?
   1.7) What is configure all about?
   1.8) How do I add a new port?
   1.9) Why don't you use threads/raw devices/async-I/O, <insert your
   favorite whiz-bang feature here>?
   1.10) How are RPMs packaged?
   1.11) How are CVS branches handled?
   1.12) Where can I get a copy of the SQL standards?

                            Technical Questions

   2.1) How do I efficiently access information in tables from the
   backend code?
   2.2) Why are table, column, type, function, view names sometimes
   referenced as Name or NameData, and sometimes as char *?
   2.3) Why do we use Node and List to make data structures?
   2.4) I just added a field to a structure. What else should I do?
   2.5) Why do we use palloc() and pfree() to allocate memory?
   2.6) What is elog()?
   2.7) What is CommandCounterIncrement()?
     _________________________________________________________________

                             General Questions

  1.1) How do I get involved in PostgreSQL development?

   This was written by Lamar Owen:

   2001-06-22

   What open source development process is used by the PostgreSQL team?

   Read HACKERS for six months (or a full release cycle, whichever is
   longer). Really. HACKERS _is_ the process. The process is not well
   documented (AFAIK -- it may be somewhere that I am not aware of) --
   and it changes continually.

   What development environment (OS, system, compilers, etc) is required
   to develop code?

   Developers Corner on the website has links to this information. The
   distribution tarball itself includes all the extra tools and documents
   that go beyond a good Unix-like development environment. In general, a
   modern Unix with a modern gcc, GNU make or equivalent, autoconf (of a
   particular version), and good working knowledge of those tools are
   required.

   What areas need support?

   The TODO list.

   You've made the first step by finding and subscribing to HACKERS.
   Once you find an area to look at in the TODO, and have read the
   documentation on the internals, etc., you check out a current CVS
   copy, write what you are going to write (keeping your CVS checkout up
   to date in the process), make up a patch (as a context diff only), and
   send it to the PATCHES list, preferably.

   Discussion on the patch typically happens there. If the patch adds a
   major feature, it is a good idea to talk about it first on the
   HACKERS list, in order to increase the chances of it being accepted,
   as well as to avoid duplication of effort. Note that experienced
   developers with a proven track record usually get the big jobs -- for
   more than one reason. Also note that PostgreSQL is highly portable --
   nonportable code will likely be dismissed out of hand.

   Once your contributions get accepted, things move from there.
   Typically, you would be added as a developer on the list on the
   website when one of the other developers recommends it. Membership on
   the steering committee is by invitation only, by the other steering
   committee members, from what I have gathered watching from a distance.

   I make these statements from having watched the process for over two
   years.

   To see a good example of how one goes about this, search the archives
   for the name 'Tom Lane' and see what his first post consisted of, and
   where he took things. In particular, note that this wasn't _that_ long
   ago -- and his bug fixing and general deep knowledge of this codebase
   are legendary. Take a few days to read after him. And pay special
   attention to both the sheer quantity and the painstaking quality of
   his work. Both are in high demand.

  1.2) How do I add a feature or fix a bug?

   The source code is over 350,000 lines. Many fixes/features are
   isolated to one specific area of the code. Others require knowledge of
   much of the source. If you are confused about where to start, ask the
   hackers list, and they will be glad to assess the complexity and give
   pointers on where to start.

   Another thing to keep in mind is that many fixes and features can be
   added with surprisingly little code. I often start by adding code,
   then looking at other areas in the code where similar things are done,
   and by the time I am finished, the patch is quite small and compact.

   When adding code, keep in mind that it should use the existing
   facilities in the source, for performance reasons and for simplicity.
   Often a review of existing code doing similar things is helpful.

   The usual process for source additions is:
     * Review the TODO list.
     * Discuss the desirability of the fix/feature on the hackers list.
     * How should it behave in complex circumstances?
     * How should it be implemented?
     * Submit the patch to the patches list.
     * Answer email questions.
     * Wait for the patch to be applied.

  1.3) How do I download/update the current source tree?

   There are several ways to obtain the source tree. Occasional
   developers can just get the most recent source tree snapshot from
   ftp.postgresql.org. Regular developers can use CVS. CVS allows you to
   download the source tree, then occasionally update your copy of the
   source tree with any new changes. Using CVS, you don't have to
   download the entire source each time, only the changed files.
   Anonymous CVS does not allow developers to update the remote source
   tree, though privileged developers can do this. There is a CVS FAQ on
   our web site that describes how to use remote CVS. You can also use
   CVSup, which has similar functionality, and is available from
   ftp.postgresql.org.

   To get your changes into the main source tree, there are two ways. You
   can generate a patch against your current source tree, perhaps using
   the make_diff tools described in section 1.5, and send it to the
   patches list. It will be reviewed, and applied in a timely manner. If
   the patch is major, and we are in beta testing, the developers may
   wait for the final release before applying your patches.

   For hard-core developers, Marc (scrappy@postgresql.org) will give you
   a Unix shell account on postgresql.org, so you can use CVS to update
   the main source tree, or you can ftp your files into your account,
   patch, and commit the changes directly into the source tree with CVS.

  1.4) How do I test my changes?

   First, use psql to make sure it is working as you expect. Then run
   src/test/regress and get the output of src/test/regress/checkresults
   with and without your changes, to see that your patch does not change
   the regression tests in unexpected ways. This practice has saved me
   many times. The regression tests exercise the code in ways I never
   would, and have caught many bugs in my patches. By finding the
   problems now, you save yourself a lot of debugging later when things
   are broken and you can't figure out when it happened.

  1.5) What tools are available for developers?

   Aside from the User documentation mentioned in the regular FAQ, there
   are several development tools available. First, all the files in the
   src/tools directory are designed for developers.
    RELEASE_CHANGES changes we have to make for each release
    SQL_keywords    standard SQL'92 keywords
    backend         description/flowchart of the backend directories
    ccsym           find standard defines made by your compiler
    entab           converts tabs to spaces, used by pgindent
    find_static     finds functions that could be made static
    find_typedef    finds typedefs in the source code
    find_badmacros  finds macros that use braces incorrectly
    make_ctags      make vi 'tags' file in each directory
    make_diff       make *.orig and diffs of source
    make_etags      make emacs 'etags' files
    make_keywords   make comparison of our keywords and SQL'92
    make_mkid       make mkid ID files
    mkldexport      create AIX exports file
    pgindent        indents C source files
    pgjindent       indents Java source files
    pginclude       scripts for adding/removing include files
    unused_oids     in pgsql/src/include/catalog

   Let me note some of these. If you point your browser at
   file:/usr/local/src/pgsql/src/tools/backend/index.html, you will see a
   few paragraphs describing the data flow, the backend components in a
   flow chart, and a description of the shared memory area. You can click
   on any flowchart box to see a description. If you then click on the
   directory name, you will be taken to the source directory, to browse
   the actual source code behind it. We also have several README files in
   some source directories to describe the function of the module. The
   browser will also display these when you enter the directory. The
   tools/backend directory is also contained on our web page under the
   title How PostgreSQL Processes a Query.

   Second, you really should have an editor that can handle tags, so you
   can tag a function call to see the function definition, and then tag
   inside that function to see an even lower-level function, and then
   back out twice to return to the original function. Most editors
   support this via tags or etags files.

   Third, you need to get id-utils from:
    ftp://alpha.gnu.org/gnu/id-utils-3.2d.tar.gz
    ftp://tug.org/gnu/id-utils-3.2d.tar.gz
    ftp://ftp.enst.fr/pub/gnu/gnits/id-utils-3.2d.tar.gz

   By running tools/make_mkid, an archive of source symbols can be
   created that can be rapidly queried like grep or edited. Others prefer
   glimpse.

   make_diff has tools to create patch diff files that can be applied to
   the distribution. This produces context diffs, which is our preferred
   format.

   Our standard format is to indent each code level with one tab, where
   each tab is displayed as four spaces. You will need to set your editor
   to display tabs as four spaces:
    vi in ~/.exrc:
            set tabstop=4
            set sw=4
    more:
            more -x4
    less:
            less -x4
    emacs:
        M-x set-variable tab-width
        or
        ; Cmd to set tab stops & indenting for working with PostgreSQL code
        (c-add-style "pgsql"
                     '("bsd"
                       (indent-tabs-mode . t)
                       (c-basic-offset   . 4)
                       (tab-width . 4)
                       (c-offsets-alist . ((case-label . +))))
                     t)    ; t = set this mode on

        and add this to your autoload list (modify file path in macro):

        (setq auto-mode-alist
              (cons '("\\`/usr/local/src/pgsql/.*\\.[chyl]\\'" . pgsql-c-mode)
                    auto-mode-alist))
        or
            /*
             * Local variables:
             *  tab-width: 4
             *  c-indent-level: 4
             *  c-basic-offset: 4
             * End:
             */

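   For illustration only, here is a small made-up function laid out in
   that style (one tab per code level, displayed as four spaces,
   BSD-style braces):

    static int
    example_count_positive(int *values, int nvalues)
    {
        int     i;
        int     count = 0;

        for (i = 0; i < nvalues; i++)
        {
            /* each code level is indented one more tab */
            if (values[i] > 0)
                count++;
        }
        return count;
    }
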
   pgindent will format code by specifying flags to your operating
   system's indent utility. This article describes the value of a
   consistent coding style.

   pgindent is run on all source files just before each beta test period.
   It auto-formats all source files to make them consistent. Comment
   blocks that need specific line breaks should be formatted as block
   comments, where the comment starts as /*------. These comments will
   not be reformatted in any way.

   pginclude contains scripts used to add needed #include's to include
   files, and remove unneeded #include's.

   When adding system types, you will need to assign OIDs to them. There
   is also a script called unused_oids in pgsql/src/include/catalog that
   shows the unused OIDs.

  1.6) What books are good for developers?

   I have four good books: An Introduction to Database Systems, by C.J.
   Date, Addison-Wesley; A Guide to the SQL Standard, by C.J. Date, et
   al., Addison-Wesley; Fundamentals of Database Systems, by Elmasri and
   Navathe; and Transaction Processing, by Jim Gray, Morgan Kaufmann.

   There is also a database performance site, with a handbook on-line
   written by Jim Gray, at http://www.benchmarkresources.com.

  1.7) What is configure all about?

   The files configure and configure.in are part of the GNU autoconf
   package. Configure allows us to test for various capabilities of the
   OS, and to set variables that can then be tested in C programs and
   Makefiles. Autoconf is installed on the PostgreSQL main server. To add
   options to configure, edit configure.in, and then run autoconf to
   generate configure.

   When configure is run by the user, it tests various OS capabilities,
   stores those in config.status and config.cache, and modifies a list of
   *.in files. For example, if there exists a Makefile.in, configure
   generates a Makefile that contains substitutions for all @var@
   parameters found by configure.

   When you need to edit files, make sure you don't waste time modifying
   files generated by configure. Edit the *.in file, and re-run configure
   to recreate the needed file. If you run make distclean from the
   top-level source directory, all files derived by configure are
   removed, so you see only the files contained in the source
   distribution.

  1.8) How do I add a new port?

   There are a variety of places that need to be modified to add a new
   port. First, start in the src/template directory. Add an appropriate
   entry for your OS. Also, use src/config.guess to add your OS to
   src/template/.similar. You shouldn't match the OS version exactly. The
   configure test will look for an exact OS version number, and if not
   found, find a match without the version number. Edit src/configure.in
   to add your new OS. (See the configure item above.) You will need to
   run autoconf, or patch src/configure too.

   Then, check src/include/port and add your new OS file, with
   appropriate values. Hopefully, there is already locking code in
   src/include/storage/s_lock.h for your CPU. There is also a
   src/makefiles directory for port-specific Makefile handling. There is
   a backend/port directory if you need special files for your OS.

  1.9) Why don't you use threads/raw devices/async-I/O, <insert your favorite
  whiz-bang feature here>?

   There is always a temptation to use the newest operating system
   features as soon as they arrive. We resist that temptation.

   First, we support 15+ operating systems, so any new feature has to be
   well established before we will consider it. Second, most new
   whiz-bang features don't provide dramatic improvements. Third, they
   usually have some downside, such as decreased reliability or
   additional code required. Therefore, we don't rush to use new features
   but rather wait for the feature to be established, then ask for
   testing to show that a measurable improvement is possible.

   As an example, threads are not currently used in the backend code
   because:
     * Historically, threads were unsupported and buggy.
     * An error in one backend can corrupt other backends.
     * Speed improvements using threads are small compared to the
       remaining backend startup time.
     * The backend code would be more complex.

   So, we are not ignorant of new features. It is just that we are
   cautious about their adoption. The TODO list often contains links to
   discussions showing our reasoning in these areas.

  1.10) How are RPMs packaged?

   This was written by Lamar Owen:

   2001-05-03

   As to how the RPMs are built -- to answer that question sanely
   requires me to know how much experience you have with the whole RPM
   paradigm. 'How is the RPM built?' is a multifaceted question. The
   obvious simple answer is that I maintain:
    1. A set of patches to make certain portions of the source tree
       'behave' in the different environment of the RPMset;
    2. The initscript;
    3. Any other ancillary scripts and files;
    4. A README.rpm-dist document that tries to adequately document both
       the differences between the RPM build and the WHY of the
       differences, as well as useful RPM environment operations (like
       using syslog, upgrading, getting postmaster to start at OS boot,
       etc);
    5. The spec file that throws it all together. This is not a trivial
       undertaking in a package of this size.

   I then download and build on as many different canonical distributions
   as I can -- currently I am able to build on Red Hat 6.2, 7.0, and 7.1
   on my personal hardware. Occasionally I receive the opportunity from
   certain commercial enterprises such as Great Bridge and PostgreSQL,
   Inc. to build on other distributions.

   I test the build by installing the resulting packages and running the
   regression tests. Once the build passes these tests, I upload to the
   postgresql.org ftp server and make a release announcement. I am also
   responsible for maintaining the RPM download area on the ftp site.

   You'll notice I said 'canonical' distributions above. That simply
   means that the machine is as stock 'out of the box' as practical --
   that is, everything (except a select few programs) on these boxen is
   installed by RPM; only official Red Hat released RPMs are used (except
   in unusual circumstances involving software that will not alter the
   build -- for example, installing a newer non-Red Hat version of the
   Dia diagramming package is OK -- installing Python 2.1 on the box that
   has Python 1.5.2 installed is not, as that alters the PostgreSQL
   build). The RPM as uploaded is built to as close to out-of-the-box
   pristine as is possible. Only the standard released 'official to that
   release' compiler is used -- and only the standard official kernel is
   used as well.

   For a time I built on Mandrake for Red Hat consumption -- no more.
   Nonstandard RPM building systems are worse than useless. Which is not
   to say that Mandrake is useless! By no means is Mandrake useless --
   unless you are building Red Hat RPMs -- and Red Hat is useless if
   you're trying to build Mandrake or SuSE RPMs, for that matter. But I
   would be foolish to use 'Lamar Owen's Super Special RPM Blend Distro
   0.1.2' to build for public consumption! :-)

   I _do_ attempt to make the _source_ RPM compatible with as many
   distributions as possible -- however, since I have limited resources
   (as a volunteer RPM maintainer) I am limited as to the amount of
   testing said build will get on other distributions, architectures, or
   systems.

   And, while I understand people's desire to immediately upgrade to the
   newest version, realize that I do this as a side interest -- I have a
   regular, full-time job as a broadcast
   engineer/webmaster/sysadmin/Technical Director which occasionally
   prevents me from making timely RPM releases. This happened during the
   early part of the 7.1 beta cycle -- but I believe I was pretty much on
   the ball for the Release Candidates and the final release.

   I am working towards a more open RPM distribution -- I would dearly
   love to more fully document the process and put everything into CVS --
   once I figure out how I want to represent things such as the spec file
   in a CVS form. It makes no sense to maintain a changelog, for
   instance, in the spec file in CVS when CVS does a better job of
   changelogs -- I will need to write a tool to generate a real spec file
   from a CVS spec-source file that would add version numbers, changelog
   entries, etc. to the result before building the RPM. IOW, I need to
   rethink the process -- and then go through the motions of putting my
   long RPM history into CVS one version at a time so that version
   history information isn't lost.

   As to why all these files aren't part of the source tree, well, unless
   there was a large cry for it to happen, I don't believe it should.
   PostgreSQL is very platform-agnostic -- and I like that. Including the
   RPM stuff as part of the Official Tarball (TM) would, IMHO, slant that
   agnostic stance in a negative way. But maybe I'm too sensitive to
   that. I'm not opposed to doing that if that is the consensus of the
   core group -- and that would be a sneaky way to get the stuff into CVS
   :-). But if the core group isn't thrilled with the idea (and my
   instinct says they're not likely to be), I am opposed to the idea --
   not to keep the stuff to myself, but to not hinder the
   platform-neutral stance. IMHO, of course.

   Of course, there are many projects that DO include all the files
   necessary to build RPMs from their Official Tarball (TM).

  1.11) How are CVS branches handled?

   This was written by Tom Lane:

   2001-05-07

   If you just do basic "cvs checkout", "cvs update", "cvs commit", then
   you'll always be dealing with the HEAD version of the files in CVS.
   That's what you want for development, but if you need to patch past
   stable releases then you have to be able to access and update the
   "branch" portions of our CVS repository. We normally fork off a branch
   for a stable release just before starting the development cycle for
   the next release.

   The first thing you have to know is the branch name for the branch you
   are interested in getting at. To do this, look at some long-lived
   file, say the top-level HISTORY file, with "cvs status -v" to see what
   the branch names are. (Thanks to Ian Lance Taylor for pointing out
   that this is the easiest way to do it.) Typical branch names are:
    REL7_1_STABLE
    REL7_0_PATCHES
    REL6_5_PATCHES

   OK, so how do you do work on a branch? By far the best way is to
   create a separate checkout tree for the branch and do your work in
   that. Not only is that the easiest way to deal with CVS, but you
   really need to have the whole past tree available anyway to test your
   work. (And you *better* test your work. Never forget that dot-releases
   tend to go out with very little beta testing --- so whenever you
   commit an update to a stable branch, you'd better be doubly sure that
   it's correct.)

   Normally, to check out the head branch, you just cd to the place you
   want to contain the toplevel "pgsql" directory and say
    cvs ... checkout pgsql

   To get a past branch, you cd to wherever you want it and say
    cvs ... checkout -r BRANCHNAME pgsql

   For example, just a couple of days ago I did
    mkdir ~postgres/REL7_1
    cd ~postgres/REL7_1
    cvs ... checkout -r REL7_1_STABLE pgsql

   and now I have a maintenance copy of 7.1.*.

   When you've done a checkout in this way, the branch name is "sticky":
   CVS automatically knows that this directory tree is for the branch,
   and whenever you do "cvs update" or "cvs commit" in this tree, you'll
   fetch or store the latest version in the branch, not the head version.
   Easy as can be.

   So, if you have a patch that needs to apply to both the head and a
   recent stable branch, you have to make the edits and do the commit
   twice, once in your development tree and once in your stable branch
   tree. This is kind of a pain, which is why we don't normally fork the
   tree right away after a major release --- we wait for a dot-release or
   two, so that we won't have to double-patch the first wave of fixes.

  1.12) Where can I get a copy of the SQL standards?

   There are two pertinent standards, SQL92 and SQL99. These standards
   are endorsed by ANSI and ISO. A draft of the SQL92 standard is
   available at http://www.contrib.andrew.cmu.edu/~shadow/. The SQL99
   standard must be purchased from ANSI at
   http://webstore.ansi.org/ansidocstore/default.asp. The main standards
   documents are ANSI X3.135-1992 for SQL92 and ANSI/ISO/IEC 9075-2-1999
   for SQL99.

   A summary of these standards is at
   http://dbs.uni-leipzig.de/en/lokal/standards.pdf and
   http://db.konkuk.ac.kr/present/SQL3.pdf.

                            Technical Questions

  2.1) How do I efficiently access information in tables from the backend code?

   You first need to find the tuples (rows) you are interested in. There
   are two ways. First, SearchSysCache() and related functions allow you
   to query the system catalogs. This is the preferred way to access
   system tables, because the first call to the cache loads the needed
   rows, and future requests can return the results without accessing the
   base table. The caches use system table indexes to look up tuples. A
   list of available caches is located in
   src/backend/utils/cache/syscache.c.
   src/backend/utils/cache/lsyscache.c contains many column-specific
   cache lookup functions.

   The rows returned are cache-owned versions of the heap rows.
   Therefore, you must not modify or delete the tuple returned by
   SearchSysCache(). What you should do is release it with
   ReleaseSysCache() when you are done using it; this informs the cache
   that it can discard that tuple if necessary. If you neglect to call
   ReleaseSysCache(), the cache entry will remain locked in the cache
   until end of transaction, which is tolerable but not very desirable.

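   As an illustration, a minimal sketch of a syscache lookup might look
   like the following. The TYPEOID cache and the pg_type fields are just
   one example, typeoid is a placeholder variable, and the exact
   SearchSysCache() argument list has varied across releases:

    HeapTuple    tup;
    Form_pg_type typtup;

    /* look up the pg_type row for a given type OID via the TYPEOID cache */
    tup = SearchSysCache(TYPEOID,
                         ObjectIdGetDatum(typeoid),
                         0, 0, 0);
    if (!HeapTupleIsValid(tup))
        elog(ERROR, "cache lookup failed for type %u", typeoid);

    typtup = (Form_pg_type) GETSTRUCT(tup);
    /* read fields such as typtup->typlen here, but do not modify them */

    ReleaseSysCache(tup);       /* tell the cache we are done with the tuple */
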
   If you can't use the system cache, you will need to retrieve the data
   directly from the heap table, using the buffer cache that is shared by
   all backends. The backend automatically takes care of loading the rows
   into the buffer cache.

   Open the table with heap_open(). You can then start a table scan with
   heap_beginscan(), then use heap_getnext() and continue as long as
   HeapTupleIsValid() returns true. Then do a heap_endscan(). Keys can be
   assigned to the scan. No indexes are used, so all rows are going to be
   compared to the keys, and only the valid rows returned.

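   A bare-bones sequential scan might be sketched as below. This assumes
   the scan API of the current sources; the exact signatures of
   heap_beginscan() and heap_getnext() have changed between releases, and
   relationOid is a placeholder:

    Relation     rel;
    HeapScanDesc scan;
    HeapTuple    tuple;

    rel = heap_open(relationOid, AccessShareLock);
    scan = heap_beginscan(rel, SnapshotNow, 0, (ScanKey) NULL);

    while (HeapTupleIsValid(tuple = heap_getnext(scan, ForwardScanDirection)))
    {
        /* examine the tuple here; do not modify or free it */
    }

    heap_endscan(scan);
    heap_close(rel, AccessShareLock);
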
   You can also use heap_fetch() to fetch rows by block number/offset.
   While scans automatically lock/unlock rows from the buffer cache, with
   heap_fetch(), you must pass a Buffer pointer, and ReleaseBuffer() it
   when completed.

   Once you have the row, you can get data that is common to all tuples,
   like t_self and t_oid, by merely accessing the HeapTuple structure
   entries. If you need a table-specific column, you should take the
   HeapTuple pointer, and use the GETSTRUCT() macro to access the
   table-specific start of the tuple. You then cast the pointer as a
   Form_pg_proc pointer if you are accessing the pg_proc table, or
   Form_pg_type if you are accessing pg_type. You can then access the
   columns by using a structure pointer:
    ((Form_pg_class) GETSTRUCT(tuple))->relnatts

   You must not directly change live tuples in this way. The best way is
   to use heap_modifytuple() and pass it your original tuple, and the
   values you want changed. It returns a palloc'ed tuple, which you pass
   to heap_replace(). You can delete tuples by passing the tuple's t_self
   to heap_destroy(). You use t_self for heap_update() too. Remember,
   tuples can be either system cache copies, which may go away after you
   call ReleaseSysCache(), or read directly from disk buffers, which go
   away when you heap_getnext(), heap_endscan(), or ReleaseBuffer() in
   the heap_fetch() case. Or it may be a palloc'ed tuple that you must
   pfree() when finished.

  2.2) Why are table, column, type, function, view names sometimes referenced
  as Name or NameData, and sometimes as char *?

   Table, column, type, function, and view names are stored in system
   tables in columns of type Name. Name is a fixed-length,
   null-terminated type of NAMEDATALEN bytes. (The default value for
   NAMEDATALEN is 64 bytes.)
    typedef struct nameData
    {
        char        data[NAMEDATALEN];
    } NameData;

    typedef NameData *Name;

   Table, column, type, function, and view names that come into the
   backend via user queries are stored as variable-length,
   null-terminated character strings.

   Many functions are called with both types of names, e.g. heap_open().
   Because the Name type is null-terminated, it is safe to pass it to a
   function expecting a char *. Because there are many cases where
   on-disk names (Name) are compared to user-supplied names (char *),
   there are many cases where Name and char * are used interchangeably.

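   For example, a Name taken from a catalog tuple can be handed straight
   to ordinary string functions. A sketch, where userName is a
   placeholder for a user-supplied char * value:

    Form_pg_class classForm = (Form_pg_class) GETSTRUCT(tuple);

    /* NameStr() yields the char * stored inside the NameData field */
    if (strcmp(NameStr(classForm->relname), userName) == 0)
        elog(NOTICE, "found relation \"%s\"", userName);
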
  2.3) Why do we use Node and List to make data structures?

   We do this because it allows a consistent way to pass data inside the
   backend in a flexible way. Every node has a NodeTag which specifies
   what type of data is inside the Node. Lists are groups of Nodes
   chained together as a forward-linked list.

   Here are some of the List manipulation commands; a short usage sketch
   follows the list:

   lfirst(i)
          return the data at list element i.

   lnext(i)
          return the next list element after i.

   foreach(i, list)
          loop through list, assigning each list element to i. It is
          important to note that i is a List *, not the data in the List
          element. You need to use lfirst(i) to get at the data. Here is
          a typical code snippet that loops through a List containing Var
          *'s and processes each one:

    List *i, *list;

    foreach(i, list)
    {
        Var *var = lfirst(i);

        /* process var here */
    }

   lcons(node, list)
          add node to the front of list, or create a new list with node
          if list is NIL.

   lappend(list, node)
          add node to the end of list. This is more expensive than lcons.

   nconc(list1, list2)
          concatenate list2 onto the end of list1.

   length(list)
          return the length of the list.

   nth(i, list)
          return the i'th element in list.

   lconsi, ...
          There are integer versions of these: lconsi, lappendi, etc.
          There are also versions for OID lists: lconso, lappendo, etc.

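   Putting a few of these together, a small illustrative sketch (varA and
   varB stand for Var pointers you already have):

    List   *vars = NIL;

    vars = lappend(vars, varA);     /* vars is now (varA) */
    vars = lcons(varB, vars);       /* vars is now (varB varA) */

    /* length(vars) now returns 2, and lfirst(vars) returns varB */
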
   You can print nodes easily inside gdb. First, to disable output
   truncation when you use the gdb print command:
    (gdb) set print elements 0

   Instead of printing values in gdb format, you can use the next two
   commands to print out List, Node, and structure contents in a verbose
   format that is easier to understand. Lists are unrolled into nodes,
   and nodes are printed in detail. The first prints in a short format,
   and the second in a long format:
    (gdb) call print(any_pointer)
    (gdb) call pprint(any_pointer)

   The output appears in the postmaster log file, or on your screen if
   you are running a backend directly without a postmaster.

  2.4) I just added a field to a structure. What else should I do?

   The structures passed around among the parser, rewriter, optimizer,
   and executor require quite a bit of support. Most structures have
   support routines in src/backend/nodes used to create, copy, read, and
   output those structures. Make sure you add support for your new field
   to these files. Find any other places the structure may need code for
   your new field. mkid is helpful with this (see above).

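   As a hypothetical sketch of the kind of one-line additions usually
   needed in src/backend/nodes/copyfuncs.c and equalfuncs.c (the node and
   field names here are invented):

    /* in the node's _copy routine in copyfuncs.c */
    newnode->newfield = from->newfield;

    /* in the node's _equal routine in equalfuncs.c */
    if (a->newfield != b->newfield)
        return false;
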
  2.5) Why do we use palloc() and pfree() to allocate memory?

   palloc() and pfree() are used in place of malloc() and free() because
   we find it easier to automatically free all memory allocated when a
   query completes. This assures us that all memory that was allocated
   gets freed even if we have lost track of where we allocated it. There
   are special non-query contexts in which memory can be allocated; these
   affect when the allocated memory is freed by the backend.

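   A minimal sketch (the buffer size and contents are arbitrary):

    char   *buf;

    buf = (char *) palloc(NAMEDATALEN);
    snprintf(buf, NAMEDATALEN, "scratch data");

    /* ... use buf ... */

    pfree(buf);     /* optional: the memory context frees anything still
                     * allocated when the query completes */
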
  2.6) What is elog()?

   elog() is used to send messages to the front-end, and optionally
   terminate the current query being processed. The first parameter is an
   elog level of DEBUG (levels 1-5), LOG, INFO, NOTICE, ERROR, FATAL, or
   PANIC. NOTICE prints on the user's terminal and in the postmaster
   logs. INFO prints only to the user's terminal and LOG prints only to
   the server logs. (These can be changed from postgresql.conf.) ERROR
   prints in both places, and terminates the current query, never
   returning from the call. FATAL terminates the backend process. The
   remaining parameters of elog are a printf-style format string and
   arguments to print.

   elog(ERROR) frees most memory and open file descriptors, so you don't
   need to clean these up before the call.

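   For example (a sketch; relid, relname, and ntuples are placeholders):

    if (!OidIsValid(relid))
        elog(ERROR, "relation \"%s\" does not exist", relname);
    /* control does not return here after elog(ERROR) */

    elog(LOG, "processed %d tuples", ntuples);
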
  2.7) What is CommandCounterIncrement()?

   Normally, transactions cannot see the rows they modify. This allows
   UPDATE foo SET x = x + 1 to work correctly.

   However, there are cases where a transaction needs to see rows
   affected in previous parts of the transaction. This is accomplished
   using a Command Counter. Incrementing the counter allows transactions
   to be broken into pieces so each piece can see rows modified by
   previous pieces. CommandCounterIncrement() increments the Command
   Counter, creating a new part of the transaction.
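
   As a rough sketch of how backend code uses it (rel and tup are assumed
   to be an already-opened relation and an already-built tuple):

    simple_heap_insert(rel, tup);

    /* make the new row visible to later commands in this transaction */
    CommandCounterIncrement();

    /* subsequent scans in the same transaction can now see the row */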