diff --git a/doc/FAQ_DEV b/doc/FAQ_DEV
index 9481c36b389..71439a22e33 100644
--- a/doc/FAQ_DEV
+++ b/doc/FAQ_DEV
@@ -28,6 +28,7 @@
12) How do I add a new port?
13) What is CommandCounterIncrement()?
14) Why don't we use threads in the backend?
+ 15) How are RPM's packaged?
_________________________________________________________________
1) What tools are available for developers?
@@ -41,7 +42,8 @@
ccsym find standard defines made by your compiler
entab converts tabs to spaces, used by pgindent
find_static finds functions that could be made static
- find_typedef get a list of typedefs in the source code
+ find_typedef finds a list of typedefs in the source code
+ find_badmacros finds macros that use braces incorrectly
make_ctags make vi 'tags' file in each directory
make_diff make *.orig and diffs of source
make_etags make emacs 'etags' files
@@ -49,6 +51,7 @@
make_mkid make mkid ID files
mkldexport create AIX exports file
pgindent indents C source files
+ pgjindent indents Java source files
pginclude scripts for adding/removing include files
unused_oids in pgsql/src/include/catalog
@@ -127,8 +130,11 @@
It auto-formats all source files to make them consistent. Comment
blocks that need specific line breaks should be formatted as block
comments, where the comment starts as /*------. These comments will
- not be reformatted in any way. pginclude contains scripts used to add
- needed #include's to include files, and removed unneeded #include's.
+ not be reformatted in any way.
+
+ pginclude contains scripts used to add needed #include's to include
+ files, and remove unneeded #include's.
+
When adding system types, you will need to assign oids to them. There
is also a script called unused_oids in pgsql/src/include/catalog that
shows the unused oids.
@@ -434,3 +440,93 @@ typedef struct nameData
* Speed improvements using threads are small compared to the
remaining backend startup time.
* The backend code would be more complex.
+
+ 15) How are RPM's packaged?
+
+ This is from Lamar Owen:
+As to how the RPMs are built -- to answer that question sanely requires
+me to know how much experience you have with the whole RPM paradigm.
+'How is the RPM built?' is a multifaceted question. The obvious simple
+answer is that I maintain:
+ 1.) A set of patches to make certain portions of the source
+ tree 'behave' in the different environment of the RPMset;
+ 2.) The initscript;
+ 3.) Any other ancillary scripts and files;
+ 4.) A README.rpm-dist document that tries to adequately document
+ both the differences between the RPM build and the WHY of the
+ differences, as well as useful RPM environment operations
+ (like using syslog, upgrading, getting postmaster to
+ start at OS boot, etc);
+ 5.) The spec file that throws it all together. This is not a
+ trivial undertaking in a package of this size.
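+
+As a rough illustration only -- the tags, version numbers, and file
+names below are made up, not taken from the actual PostgreSQL spec
+file -- an RPM spec file is laid out along these lines:
+
+    Summary: Descriptive one-line summary of the package
+    Name: postgresql
+    Version: 7.1.2
+    Release: 1
+    Source0: postgresql-7.1.2.tar.gz
+    Patch0: postgresql-7.1.2-rpm-behave.patch
+
+    %description
+    Longer description shown by 'rpm -qi'.
+
+    # unpack the tarball and apply the maintained patches
+    %prep
+    %setup -q
+    %patch0 -p1
+
+    # configure and compile
+    %build
+    ./configure --prefix=/usr
+    make
+
+    # install into a build root, then list what goes into each package
+    %install
+    %files
+
+    %changelog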
+
+I then download and build on as many different canonical distributions
+as I can -- currently I am able to build on Red Hat 6.2, 7.0, and 7.1 on
+my personal hardware. Occasionally I receive opportunity from certain
+commercial enterprises such as Great Bridge and PostgreSQL Inc to build
+on other distributions.
+
+I test the build by installing the resulting packages and running the
+regression tests. Once the build passes these tests, I upload to the
+postgresql.org ftp server and make a release announcement. I am also
+responsible for maintaining the RPM download area on the ftp site.
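+
+For someone rebuilding from the published source RPM, that cycle looks
+roughly like the following (file names and paths are examples only; on
+later RPM releases the rebuild command is "rpmbuild --rebuild"):
+
+    # rebuild binary packages from the source RPM
+    rpm --rebuild postgresql-7.1.2-1.src.rpm
+
+    # install or upgrade the resulting binary packages
+    rpm -Uvh /usr/src/redhat/RPMS/i386/postgresql*-7.1.2-1.i386.rpm
+
+    # start the server through the packaged initscript, then test
+    /etc/rc.d/init.d/postgresql start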
+
+You'll notice I said 'canonical' distributions above. That simply means
+that the machine is as stock 'out of the box' as practical -- that is,
+everything (except select few programs) on these boxen are installed by
+RPM; only official Red Hat released RPMs are used (except in unusual
+circumstances involving software that will not alter the build -- for
+example, installing a newer non-RedHat version of the Dia diagramming
+package is OK -- installing Python 2.1 on the box that has Python 1.5.2
+installed is not, as that alters the PostgreSQL build). The RPM as
+uploaded is built to as close to out-of-the-box pristine as is
+possible. Only the standard released 'official to that release'
+compiler is used -- and only the standard official kernel is used as
+well.
+
+For a time I built on Mandrake for RedHat consumption -- no more.
+Nonstandard RPM building systems are worse than useless. Which is not
+to say that Mandrake is useless! By no means is Mandrake useless --
+unless you are building Red Hat RPMs -- and Red Hat is useless if you're
+trying to build Mandrake or SuSE RPMs, for that matter. But I would be
+foolish to use 'Lamar Owen's Super Special RPM Blend Distro 0.1.2' to
+build for public consumption! :-)
+
+I _do_ attempt to make the _source_ RPM compatible with as many
+distributions as possible -- however, since I have limited resources (as
+a volunteer RPM maintainer) I am limited as to the amount of testing
+said build will get on other distributions, architectures, or systems.
+
+And, while I understand people's desire to immediately upgrade to the
+newest version, realize that I do this as a side interest -- I have a
+regular, full-time job as a broadcast
+engineer/webmaster/sysadmin/Technical Director which occasionally
+prevents me from making timely RPM releases. This happened during the
+early part of the 7.1 beta cycle -- but I believe I was pretty much on
+the ball for the Release Candidates and the final release.
+
+I am working towards a more open RPM distribution -- I would dearly love
+to more fully document the process and put everything into CVS -- once I
+figure out how I want to represent things such as the spec file in a CVS
+form. It makes no sense to maintain a changelog, for instance, in the
+spec file in CVS when CVS does a better job of changelogs -- I will need
+to write a tool to generate a real spec file from a CVS spec-source file
+that would add version numbers, changelog entries, etc to the result
+before building the RPM. IOW, I need to rethink the process -- and then
+go through the motions of putting my long RPM history into CVS one
+version at a time so that version history information isn't lost.
+
+As to why all these files aren't part of the source tree, well, unless
+there was a large cry for it to happen, I don't believe it should.
+PostgreSQL is very platform-agnostic -- and I like that. Including the
+RPM stuff as part of the Official Tarball (TM) would, IMHO, slant that
+agnostic stance in a negative way. But maybe I'm too sensitive to
+that. I'm not opposed to doing that if that is the consensus of the
+core group -- and that would be a sneaky way to get the stuff into CVS
+:-). But if the core group isn't thrilled with the idea (and my
+instinct says they're not likely to be), I am opposed to the idea -- not
+to keep the stuff to myself, but to not hinder the platform-neutral
+stance. IMHO, of course.
+
+Of course, there are many projects that DO include all the files
+necessary to build RPMs from their Official Tarball (TM).
diff --git a/doc/src/FAQ/FAQ_DEV.html b/doc/src/FAQ/FAQ_DEV.html
index f3519893b6c..149afb6eb65 100644
--- a/doc/src/FAQ/FAQ_DEV.html
+++ b/doc/src/FAQ/FAQ_DEV.html
@@ -52,6 +52,7 @@
13) What is CommandCounterIncrement()?
14) Why don't we use threads in the backend?
15) How are RPM's packaged?
+ 16) How are CVS branches handled?
- RELEASE_CHANGES changes we have to make for each release
- SQL_keywords standard SQL'92 keywords
+ RELEASE_CHANGES changes we have to make for each release
+ SQL_keywords standard SQL'92 keywords
backend description/flowchart of the backend directories
ccsym find standard defines made by your compiler
entab converts tabs to spaces, used by pgindent
find_static finds functions that could be made static
- find_typedef finds a list of typedefs in the source code
+ find_typedef finds typedefs in the source code
find_badmacros finds macros that use braces incorrectly
make_ctags make vi 'tags' file in each directory
make_diff make *.orig and diffs of source
make_etags make emacs 'etags' files
- make_keywords.README make comparison of our keywords and SQL'92
+ make_keywords make comparison of our keywords and SQL'92
make_mkid make mkid ID files
mkldexport create AIX exports file
pgindent indents C source files
@@ -634,6 +635,107 @@
Of course, there are many projects that DO include all the files
necessary to build RPMs from their Official Tarball (TM).
+
+ 16) How are CVS branches handled?
+
+ This was written by Tom Lane:
+
+If you just do basic "cvs checkout", "cvs update", "cvs commit", then
+you'll always be dealing with the HEAD version of the files in CVS.
+That's what you want for development, but if you need to patch past
+stable releases then you have to be able to access and update the
+"branch" portions of our CVS repository. We normally fork off a branch
+for a stable release just before starting the development cycle for the
+next release.
+
+The first thing you have to know is the branch name for the branch you
+are interested in getting at. Unfortunately Marc has been less than
+100% consistent in naming the things. One way to check is to apply
+"cvs log" to any file that goes back a long time, for example HISTORY
+in the top directory:
+
+    $ cvs log HISTORY | more
+
+    RCS file: /home/projects/pgsql/cvsroot/pgsql/HISTORY,v
+    Working file: HISTORY
+    head: 1.106
+    branch:
+    locks: strict
+    access list:
+    symbolic names:
+        REL7_1_STABLE: 1.106.0.2
+        REL7_1_BETA: 1.79
+        REL7_1_BETA3: 1.86
+        REL7_1_BETA2: 1.86
+        REL7_1: 1.102
+        REL7_0_PATCHES: 1.70.0.2
+        REL7_0: 1.70
+        REL6_5_PATCHES: 1.52.0.2
+        REL6_5: 1.52
+        REL6_4: 1.44.0.2
+        release-6-3: 1.33
+        SUPPORT: 1.1.1.1
+        PG95-DIST: 1.1.1
+    keyword substitution: kv
+    total revisions: 129; selected revisions: 129
+    More---q
+
+Unfortunately "cvs log" isn't all that great about distinguishing
+branches from tags --- it calls 'em all "symbolic names". (A "tag" just
+marks a specific timepoint across all files --- it's essentially a
+snapshot whereas a branch is a changeable fileset.) Rule of thumb is
+that names attached to four-number versions where the third number is
+zero represent branches, the others are just tags. Here we can see that
+the extant branches are
+        REL7_1_STABLE
+        REL7_0_PATCHES
+        REL6_5_PATCHES
+The next commit to the head will be revision 1.107, whereas any changes
+committed into the REL7_1_STABLE branch will have revision numbers like
+1.106.2.*, corresponding to the branch number 1.106.0.2 (don't ask where
+the zero went...).
+
+OK, so how do you do work on a branch? By far the best way is to create
+a separate checkout tree for the branch and do your work in that. Not
+only is that the easiest way to deal with CVS, but you really need to
+have the whole past tree available anyway to test your work. (And you
+*better* test your work. Never forget that dot-releases tend to go out
+with very little beta testing --- so whenever you commit an update to a
+stable branch, you'd better be doubly sure that it's correct.)
+
+Normally, to check out the head branch, you just cd to the place you
+want to contain the toplevel "pgsql" directory and say
+
+        cvs ... checkout pgsql
+
+To get a past branch, you cd to wherever you want it and say
+
+        cvs ... checkout -r BRANCHNAME pgsql
+
+For example, just a couple days ago I did
+
+        mkdir ~postgres/REL7_1
+        cd ~postgres/REL7_1
+        cvs ... checkout -r REL7_1_STABLE pgsql
+
+and now I have a maintenance copy of 7.1.*.
+
+When you've done a checkout in this way, the branch name is "sticky":
+CVS automatically knows that this directory tree is for the branch,
+and whenever you do "cvs update" or "cvs commit" in this tree, you'll
+fetch or store the latest version in the branch, not the head version.
+Easy as can be.
+
+So, if you have a patch that needs to apply to both the head and a
+recent stable branch, you have to make the edits and do the commit
+twice, once in your development tree and once in your stable branch
+tree. This is kind of a pain, which is why we don't normally fork
+the tree right away after a major release --- we wait for a dot-release
+or two, so that we won't have to double-patch the first wave of fixes.
+
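+As a concrete illustration (the directory layout, file names, and
+commit messages here are made up), applying the same fix to both trees
+might look like:
+
+        cd ~postgres/HEAD/pgsql
+        patch -p0 < fix.patch
+        cvs commit -m "Fix such-and-such" src/backend/commands/foo.c
+
+        cd ~postgres/REL7_1/pgsql
+        patch -p0 < fix.patch
+        cvs commit -m "Back-patch fix to REL7_1_STABLE" src/backend/commands/foo.c
+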
+Also, Ian Lance Taylor points out that branches and tags can be
+distinguished by using "cvs status -v".
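+
+A minimal sketch of that check (output abridged, and the exact layout
+varies between CVS versions):
+
+        $ cvs status -v HISTORY
+        ...
+           Existing Tags:
+                REL7_1_STABLE               (branch: 1.106.2)
+                REL7_1                      (revision: 1.102)
+        ...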