From 36ec29d5314949bec18073758b497574514f4cd0 Mon Sep 17 00:00:00 2001
From: Rich Bowen
+
+ A few notes on general pedagogical style here. In the
+ interest of conciseness, all structure declarations here are
+ incomplete --- the real ones have more slots that I'm not
+ telling you about. For the most part, these are reserved to one
+ component of the server core or another, and should be altered
+ by modules with caution. However, in some cases, they really
+ are things I just haven't gotten around to yet. Welcome to the
+ bleeding edge. Finally, here's an outline, to give you some bare idea of
+ what's coming up, and in what order:
+
+
+
+
+
+ The handlers themselves are functions of one argument (a
+ Let's begin with handlers. In order to handle the CGI
+ scripts, the module declares a response handler for them.
+ Because of The module needs to maintain some per (virtual) server
+ information, namely, the Finally, this module contains code to handle the
+ A final note on the declared types of the arguments of some
+ of these commands: a
+ The most important such information is a small set of
+ character strings describing attributes of the object being
+ requested, including its URI, filename, content-type and
+ content-encoding (these being filled in by the translation and
+ type-check handlers which handle the request,
+ respectively).
+ Other commonly used data items are tables giving the MIME
+ headers on the client's original request, MIME headers to be
+ sent back with the response (which modules can add to at will),
+ and environment variables for any subprocesses which are
+ spawned off in the course of servicing the request. These
+ tables are manipulated using the
-
-Other commonly used data items are tables giving the MIME headers on
-the client's original request, MIME headers to be sent back with the
-response (which modules can add to at will), and environment variables
-for any subprocesses which are spawned off in the course of servicing
-the request. These tables are manipulated using the
-
-
+
-
- Here is an abridged declaration, giving the fields most
+ commonly used:
+
+
+ Such handlers can construct a sub-request,
+ using the functions
+ (Server-side includes work by building sub-requests and
+ then actually invoking the response handler for them, via
+ the function
+ They should begin by sending an HTTP response header, using
+ the function
+ Otherwise, they should produce a request body which responds
+ to the client as appropriate. The primitives for this are
+
-
-Otherwise, they should produce a request body which responds to the
-client as appropriate. The primitives for this are
-
-At this point, you should more or less understand the following piece
-of code, which is the handler which handles
-
- At this point, you should more or less understand the
+ following piece of code, which is the handler which handles
+
+ (Invoking
-One of the problems of writing and designing a server-pool server is
-that of preventing leakage, that is, allocating resources (memory,
-open files, etc.), without subsequently releasing them. The resource
-pool machinery is designed to make it easy to prevent this from
-happening, by allowing resource to be allocated in such a way that
-they are automatically released when the server is done with
-them.
-
-The way this works is as follows: the memory which is allocated, file
-opened, etc., to deal with a particular request are tied to a
-resource pool which is allocated for the request. The pool
-is a data structure which itself tracks the resources in question.
-
-When the request has been processed, the pool is cleared. At
-that point, all the memory associated with it is released for reuse,
-all files associated with it are closed, and any other clean-up
-functions which are associated with the pool are run. When this is
-over, we can be confident that all the resource tied to the pool have
-been released, and that none of them have leaked.
-
-Server restarts, and allocation of memory and resources for per-server
-configuration, are handled in a similar way. There is a
-configuration pool, which keeps track of resources which were
-allocated while reading the server configuration files, and handling
-the commands therein (for instance, the memory that was allocated for
-per-server module configuration, log files and other files that were
-opened, and so forth). When the server restarts, and has to reread
-the configuration files, the configuration pool is cleared, and so the
-memory and file descriptors which were taken up by reading them the
-last time are made available for reuse.
-
-It should be noted that use of the pool machinery isn't generally
-obligatory, except for situations like logging handlers, where you
-really need to register cleanups to make sure that the log file gets
-closed when the server restarts (this is most easily done by using the
-function
-We begin here by describing how memory is allocated to pools, and then
-discuss how other resources are tracked by the resource pool
-machinery.
-
-Memory is allocated to pools by calling the function
- One of the problems of writing and designing a server-pool
+ server is that of preventing leakage, that is, allocating
+ resources (memory, open files, etc.), without
+ subsequently releasing them. The resource pool machinery is
+ designed to make it easy to prevent this from happening, by
+ allowing resources to be allocated in such a way that they are
+ automatically released when the server is done with
+ them. The way this works is as follows: the memory which is
+ allocated, files opened, etc., to deal with a
+ particular request are tied to a resource pool which
+ is allocated for the request. The pool is a data structure
+ which itself tracks the resources in question. When the request has been processed, the pool is
+ cleared. At that point, all the memory associated with
+ it is released for reuse, all files associated with it are
+ closed, and any other clean-up functions which are associated
+ with the pool are run. When this is over, we can be confident
+ that all the resources tied to the pool have been released, and
+ that none of them have leaked. Server restarts, and allocation of memory and resources for
+ per-server configuration, are handled in a similar way. There
+ is a configuration pool, which keeps track of
+ resources which were allocated while reading the server
+ configuration files, and handling the commands therein (for
+ instance, the memory that was allocated for per-server module
+ configuration, log files and other files that were opened, and
+ so forth). When the server restarts, and has to reread the
+ configuration files, the configuration pool is cleared, and so
+ the memory and file descriptors which were taken up by reading
+ them the last time are made available for reuse. It should be noted that use of the pool machinery isn't
+ generally obligatory, except for situations like logging
+ handlers, where you really need to register cleanups to make
+ sure that the log file gets closed when the server restarts
+ (this is most easily done by using the function We begin here by describing how memory is allocated to
+ pools, and then discuss how other resources are tracked by the
+ resource pool machinery. Memory is allocated to pools by calling the function
+
-Note that there is no
-(It also raises the possibility that heavy use of
-There are functions which allocate initialized memory, and are
-frequently useful. The function Note that there is no (It also raises the possibility that heavy use of
+ There are functions which allocate initialized memory, and
+ are frequently useful. The function
-returns a pointer to 8 bytes worth of memory, initialized to
-
-A pool is really defined by its lifetime more than anything else. There
-are some static pools in http_main which are passed to various
-non-http_main functions as arguments at opportune times. Here they are:
-
-For almost everything folks do, r->pool is the pool to use. But you
-can see how other lifetimes, such as pchild, are useful to some
-modules... such as modules that need to open a database connection once
-per child, and wish to clean it up when the child dies.
-
-You can also see how some bugs have manifested themself, such as setting
-connection->user to a value from r->pool -- in this case
-connection exists
-for the lifetime of ptrans, which is longer than r->pool (especially if
-r->pool is a subrequest!). So the correct thing to do is to allocate
-from connection->pool.
-
-And there was another interesting bug in mod_include/mod_cgi. You'll see
-in those that they do this test to decide if they should use r->pool
-or r->main->pool. In this case the resource that they are registering
-for cleanup is a child process. If it were registered in r->pool,
-then the code would wait() for the child when the subrequest finishes.
-With mod_include this could be any old #include, and the delay can be up
-to 3 seconds... and happened quite frequently. Instead the subprocess
-is registered in r->main->pool which causes it to be cleaned up when
-the entire request is done -- i.e., after the output has been sent to
-the client and logging has happened.
-
-As indicated above, resource pools are also used to track other sorts
-of resources besides memory. The most common are open files. The
-routine which is typically used for this is returns a pointer to 8 bytes worth of memory, initialized to
+ A pool is really defined by its lifetime more than anything
+ else. There are some static pools in http_main which are passed
+ to various non-http_main functions as arguments at opportune
+ times. Here they are: For almost everything folks do, r->pool is the pool to
+ use. But you can see how other lifetimes, such as pchild, are
+ useful to some modules... such as modules that need to open a
+ database connection once per child, and wish to clean it up
+ when the child dies. You can also see how some bugs have manifested themselves,
+ such as setting connection->user to a value from r->pool
+ -- in this case connection exists for the lifetime of ptrans,
+ which is longer than r->pool (especially if r->pool is a
+ subrequest!). So the correct thing to do is to allocate from
+ connection->pool. And there was another interesting bug in
+ mod_include/mod_cgi. You'll see in those that they do this test
+ to decide if they should use r->pool or r->main->pool.
+ In this case the resource that they are registering for cleanup
+ is a child process. If it were registered in r->pool, then
+ the code would wait() for the child when the subrequest
+ finishes. With mod_include this could be any old #include, and
+ the delay can be up to 3 seconds... and happened quite
+ frequently. Instead the subprocess is registered in
+ r->main->pool which causes it to be cleaned up when the
+ entire request is done -- i.e., after the output has
+ been sent to the client and logging has happened. As indicated above, resource pools are also used to track
+ other sorts of resources besides memory. The most common are
+ open files. The routine which is typically used for this is
+
-There is also a
-Unlike the case for memory, there are functions to close
-files allocated with
-(Using the
-Pool cleanups live until clear_pool() is called: clear_pool(a) recursively
-calls destroy_pool() on all subpools of a; then calls all the cleanups for a;
-then releases all the memory for a. destroy_pool(a) calls clear_pool(a)
-and then releases the pool structure itself. i.e., clear_pool(a) doesn't
-delete a, it just frees up all the resources and you can start using it
-again immediately.
-
+ There is also a
+ Unlike the case for memory, there are functions to
+ close files allocated with
+ (Using the
+ Pool cleanups live until clear_pool() is called:
+ clear_pool(a) recursively calls destroy_pool() on all subpools
+ of a; then calls all the cleanups for a; then releases all the
+ memory for a. destroy_pool(a) calls clear_pool(a) and then
+ releases the pool structure itself. i.e.,
+ clear_pool(a) doesn't delete a, it just frees up all the
+ resources and you can start using it again immediately.
+
+ The primitive for creating a sub-pool is
+
+ One final note --- sub-requests have their own resource
+ pools, which are sub-pools of the resource pool for the main
+ request. The polite way to reclaim the resources associated
+ with a sub request which you have allocated (using the
+ (Again, under most circumstances, you shouldn't feel obliged
+ to call this function; only 2K of memory or so are allocated
+ for a typical sub request, and it will be freed anyway when the
+ main request pool is cleared. It is only when you are
+ allocating many, many sub-requests for a single main request
+ that you should seriously consider the
+ However, just giving the modules command tables is not
+ enough to divorce them completely from the server core. The
+ server has to remember the commands in order to act on them
+ later. That involves maintaining data which is private to the
+ modules, and which can be either per-server, or per-directory.
+ Most things are per-directory, including in particular access
+ control and authorization information, but also information on
+ how to determine file types from suffixes, which can be
+ modified by Another requirement for emulating the NCSA server is being
+ able to handle the per-directory configuration files, generally
+ called Finally, after having served a request which involved
+ reading
+ (If we are reading a
-
-For the MIME module, the per-dir config creation function just
- For the MIME module, the per-dir config creation function
+ just
-
-To do that, the server invokes the module's per-directory config merge
-function, if one is present. That function takes three arguments:
-the two structures being merged, and a resource pool in which to
-allocate the result. For the MIME module, all that needs to be done
-is overlay the tables from the new per-directory config structure with
-those from the parent:
-
- To do that, the server invokes the module's per-directory
+ config merge function, if one is present. That function takes
+ three arguments: the two structures being merged, and a
+ resource pool in which to allocate the result. For the MIME
+ module, all that needs to be done is overlay the tables from
+ the new per-directory config structure with those from the
+ parent:
-
-
+ Another way in which this particular command handler is
+ unusually simple is that there are no error conditions which it
+ can encounter. If there were, it could return an error message
+ instead of
-
-The MIME module's command table has entries for these commands, which
-look like this:
-
- The MIME module's command table has entries for these
+ commands, which look like this:
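+ (The table itself is not reproduced in this hunk; the following
+ is only a sketch of what such command_rec entries look like in
+ this API. The handler names add_type and add_encoding are
+ assumed here for illustration.)

const char *add_type (cmd_parms *cmd, mime_dir_config *m, char *ct, char *ext);
const char *add_encoding (cmd_parms *cmd, mime_dir_config *m, char *enc, char *ext);

command_rec mime_cmds[] = {
{ "AddType", add_type, NULL, OR_FILEINFO, TAKE2,
    "a mime type followed by a file extension" },
{ "AddEncoding", add_encoding, NULL, OR_FILEINFO, TAKE2,
    "an encoding (e.g., gzip), followed by a file extension" },
{ NULL }
};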
-
-The only substantial difference is that when a command needs to
-configure the per-server private module data, it needs to go to the
- The only substantial difference is that when a command needs
+ to configure the per-server private module data, it needs to go
+ to the The allocation mechanisms within APR have a number of debugging
-modes that can be used to assist in finding memory problems. This document describes
-the modes available and gives instructions on activating them. The allocation mechanisms within APR have a number of
+ debugging modes that can be used to assist in finding memory
+ problems. This document describes the modes available and gives
+ instructions on activating them. Debugging support: Define this to enable code which helps detect re-use of freed memory and other such nonsense. The theory is simple. The FILL_BYTE (0xa5) is written over all malloc'd memory as we receive it, and is written over everything that we free up during a clear_pool. We check that blocks on the free list always have the FILL_BYTE in them, and we check during palloc() that the bytes still have FILL_BYTE in them. If you ever see garbage URLs or whatnot containing lots of 0xa5s then you know something used data that's been freed or uninitialized. If defined all allocations will be done with malloc and free()d appropriately at the end.
- This is intended to be used with something like Electric Fence or Purify to help detect memory problems. Note that if you're using efence then you should also add in ALLOC_DEBUG. But don't add in ALLOC_DEBUG if you're using Purify because ALLOC_DEBUG would hide all the uninitialized read errors that Purify can diagnose. This is intended to detect cases where the wrong pool is used when assigning data to an object in another pool. Debugging support: Define this to enable code which
+ helps detect re-use of freed memory and other such
+ nonsense. In particular, it causes the table_{set,add,merge}n routines to check that their arguments are safe for the apr_table_t they're being placed in. It currently only works with the unix multiprocess model, but could be extended to others. The theory is simple. The FILL_BYTE (0xa5) is written over
+ all malloc'd memory as we receive it, and is written over
+ everything that we free up during a clear_pool. We check that
+ blocks on the free list always have the FILL_BYTE in them, and
+ we check during palloc() that the bytes still have FILL_BYTE in
+ them. If you ever see garbage URLs or whatnot containing lots
+ of 0xa5s then you know something used data that's been freed or
+ uninitialized. Provide diagnostic information about make_table() calls which are possibly too small. This requires a recent gcc which supports __builtin_return_address(). The error_log output will be a message such as: Use "l *0x804d874" to find the source that corresponds to. It
- indicates that a apr_table_t allocated by a call at that address has possibly too small an initial apr_table_t size guess. Provide some statistics on the cost of allocations. If defined all allocations will be done with malloc and
+ free()d appropriately at the end. This requires a bit of an understanding of how alloc.c works. This is intended to be used with something like Electric
+ Fence or Purify to help detect memory problems. Note that if
+ you're using efence then you should also add in ALLOC_DEBUG.
+ But don't add in ALLOC_DEBUG if you're using Purify because
+ ALLOC_DEBUG would hide all the uninitialized read errors that
+ Purify can diagnose. Not all the options outlined above can be activated at the same time. the following table gives more information. This is intended to detect cases where the wrong pool is
+ used when assigning data to an object in another pool.
- In particular, it causes the table_{set,add,merge}n routines
+ to check that their arguments are safe for the apr_table_t
+ they're being placed in. It currently only works with the unix
+ multiprocess model, but could be extended to others. Additionally the debugging options are not suitable for multi-threaded versions of the server. When trying to debug with these options the server should be started in single process mode. Provide diagnostic information about make_table() calls
+ which are possibly too small. The various options for debugging memory are now enabled in the apr_general.h header file in APR. The various options are enabled by uncommenting the define for the option you wish to use. The section of the code currently looks like this (contained in srclib/apr/include/apr_pools.h) This requires a recent gcc which supports
+ __builtin_return_address(). The error_log output will be a
+ message such as: Use "l *0x804d874" to find the
+ source that corresponds to it. It indicates that an apr_table_t
+ allocated by a call at that address has a possibly too-small
+ initial apr_table_t size guess. Provide some statistics on the cost of
+ allocations. This requires a bit of an understanding of how alloc.c
+ works. Not all the options outlined above can be activated at the
+ same time. The following table gives more information. Additionally the debugging options are not suitable for
+ multi-threaded versions of the server. When trying to debug
+ with these options the server should be started in single
+ process mode. The various options for debugging memory are now enabled in
+ the apr_general.h header file in APR. The various options are
+ enabled by uncommenting the define for the option you wish to
+ use. The section of the code currently looks like this
+ (contained in srclib/apr/include/apr_pools.h) To enable allocation debugging simply move the #define ALLOC_DEBUG above the start of the comments block and rebuild the server. To enable allocation debugging simply move the #define
+ ALLOC_DEBUG above the start of the comments block and rebuild
+ the server. Apache 2.0 uses DoxyGen to document the API's and global variables in the
- the code. This will explain the basics of how to document using DoxyGen.
+ To start a documentation block, use /** In the middle of the block, there are multiple tags we can use: Apache 2.0 uses DoxyGen to document the APIs and global
+ variables in the code. This will explain the basics of how
+ to document using DoxyGen. To start a documentation block, use /** In the middle of the block, there are multiple tags we can
+ use:Warning:
-This document has not been updated to take into account changes
-made in the 2.0 version of the Apache HTTP Server. Some of the
-information may still be relevant, but please use it
-with care.
-
+
+
+
-Apache API notes
+
+ Warning: This document has not been updated
+ to take into account changes made in the 2.0 version of the
+ Apache HTTP Server. Some of the information may still be
+ relevant, but please use it with care.
+
-Finally, here's an outline, to give you some bare idea of what's
-coming up, and in what order:
+ Apache API notes
+ These are some notes on the Apache API and the data structures
+ you have to deal with, etc. They are not yet nearly
+ complete, but hopefully, they will help you get your bearings.
+ Keep in mind that the API is still subject to change as we gain
+ experience with it. (See the TODO file for what might
+ be coming). However, it will be easy to adapt modules to any
+ changes that are made. (We have more modules to adapt than you
+ do).
-
-
+ Basic concepts.
+
+
-Handlers, Modules, and Requests
+
+
+
-
+ SetEnv, which don't really fit well elsewhere.
-
+
+ request_rec
-
+ OK.
- DECLINED. In this case, the
- server behaves in all respects as if the handler simply hadn't
- been there.
- */* (i.e., a
-wildcard MIME type specification). However, wildcard handlers are
-only invoked if the server has already tried and failed to find a more
-specific response handler for the MIME type of the requested object
-(either none existed, or they all declined).request_rec structure. vide infra), which returns an
-integer, as above.A brief tour of a module
+ ScriptAlias config file
-command. It's actually a great deal more complicated than most
-modules, but if we're going to have only one example, it might as well
-be the one with its fingers in every place.ScriptAlias, it also has handlers for the name
-translation phase (to recognize ScriptAliased URIs), the
-type-checking phase (any ScriptAliased request is typed
-as a CGI script).ScriptAliases in effect;
-the module structure therefore contains pointers to a functions which
-builds these structures, and to another which combines two of them (in
-case the main server and a virtual server both have
-ScriptAliases declared).ScriptAlias command itself. This particular module only
-declares one command, but there could be more, so modules have
-command tables which declare their commands, and describe
-where they are permitted, and how they are to be invoked.
+
+ pool is a pointer to a resource pool
-structure; these are used by the server to keep track of the memory
-which has been allocated, files opened, etc., either to service a
-particular request, or to handle the process of configuring itself.
-That way, when the request is over (or, for the configuration pool,
-when the server is restarting), the memory can be freed, and the files
-closed, en masse, without anyone having to write explicit code to
-track them all down and dispose of them. Also, a
-cmd_parms structure contains various information about
-the config file being read, and other status information, which is
-sometimes of use to the function which processes a config-file command
-(such as ScriptAlias).
+
+
Basic concepts.
+ We begin with an overview of the basic concepts behind the API,
+ and how they are manifested in the code.
+
+ Handlers, Modules, and
+ Requests
+ Apache breaks down request handling into a series of steps,
+ more or less the same way the Netscape server API does
+ (although this API has a few more stages than NetSite does, as
+ hooks for stuff I thought might be useful in the future). These
+ are:
+
+
+
+ These phases are handled by looking at each of a succession of
+ modules, looking to see if each of them has a handler
+ for the phase, and attempting to invoke it if so. The handler
+ can typically do one of three things:
+
+ SetEnv, which don't really fit well
+ elsewhere.
+
+ Most phases are terminated by the first module that handles
+ them; however, for logging, `fixups', and non-access
+ authentication checking, all handlers always run (barring an
+ error). Also, the response phase is unique in that modules may
+ declare multiple handlers for it, via a dispatch table keyed on
+ the MIME type of the requested object. Modules may declare a
+ response-phase handler which can handle any request,
+ by giving it the key OK.DECLINED. In this case,
+ the server behaves in all respects as if the handler simply
+ hadn't been there.*/* (i.e., a
+ wildcard MIME type specification). However, wildcard handlers
+ are only invoked if the server has already tried and failed to
+ find a more specific response handler for the MIME type of the
+ requested object (either none existed, or they all declined).
+
+ request_rec structure. vide infra), which returns
+ an integer, as above.A brief tour of a
+ module
+ At this point, we need to explain the structure of a module.
+ Our candidate will be one of the messier ones, the CGI module
+ --- this handles both CGI scripts and the
+ ScriptAlias config file command. It's actually a
+ great deal more complicated than most modules, but if we're
+ going to have only one example, it might as well be the one
+ with its fingers in every place.
+
+ ScriptAlias, it also has handlers for
+ the name translation phase (to recognize
+ ScriptAliased URIs), the type-checking phase (any
+ ScriptAliased request is typed as a CGI
+ script).ScriptAliases in effect;
+ the module structure therefore contains pointers to a function
+ which builds these structures, and to another which combines
+ two of them (in case the main server and a virtual server both
+ have ScriptAliases declared).ScriptAlias command itself. This particular module
+ only declares one command, but there could be more, so modules
+ have command tables which declare their commands, and
+ describe where they are permitted, and how they are to be
+ invoked.pool is a pointer to a
+ resource pool structure; these are used by the server
+ to keep track of the memory which has been allocated, files
+ opened, etc., either to service a particular request,
+ or to handle the process of configuring itself. That way, when
+ the request is over (or, for the configuration pool, when the
+ server is restarting), the memory can be freed, and the files
+ closed, en masse, without anyone having to write
+ explicit code to track them all down and dispose of them. Also,
+ a cmd_parms structure contains various information
+ about the config file being read, and other status information,
+ which is sometimes of use to the function which processes a
+ config-file command (such as ScriptAlias). With no
+ further ado, the module itself:
/* Declarations of handlers. */
int translate_scriptalias (request_rec *);
@@ -224,59 +266,65 @@ module cgi_module = {
NULL, /* logger */
NULL /* header parser */
};
-
+
-How handlers work
+ How handlers work
+ The sole argument to handlers is a request_rec
+ structure. This structure describes a particular request which
+ has been made to the server, on behalf of a client. In most
+ cases, each connection to the client generates only one
+ request_rec structure.
-The sole argument to handlers is a request_rec structure.
-This structure describes a particular request which has been made to
-the server, on behalf of a client. In most cases, each connection to
-the client generates only one request_rec structure.A brief tour of the
+
+ The request_recrequest_rec contains pointers to a resource
+ pool which will be cleared when the server is finished handling
+ the request; to structures containing per-server and
+ per-connection information, and most importantly, information
+ on the request itself.
-A brief tour of the
+ request_recrequest_rec contains pointers to a resource pool
-which will be cleared when the server is finished handling the
-request; to structures containing per-server and per-connection
-information, and most importantly, information on the request itself.ap_table_get and
+ ap_table_set routines.ap_table_get and ap_table_set routines.
- Note that the Content-type header value cannot be
- set by module content-handlers using the ap_table_*()
- routines. Rather, it is set by pointing the content_type
- field in the request_rec structure to an appropriate
- string. E.g.,
-
+ Finally, there are pointers to two data structures which, in
+ turn, point to per-module configuration structures.
+ Specifically, these hold pointers to the data structures which
+ the module has built to describe the way it has been configured
+ to operate in a given directory (via
+
+
+ Note that the Content-type header value
+ cannot be set by module content-handlers using the
+ ap_table_*() routines. Rather, it is set by
+ pointing the content_type field in the
+ request_rec structure to an appropriate string.
+ E.g.,
+
-Finally, there are pointers to two data structures which, in turn,
-point to per-module configuration structures. Specifically, these
-hold pointers to the data structures which the module has built to
-describe the way it has been configured to operate in a given
-directory (via
r->content_type = "text/html";
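+ For the header tables themselves, here is a minimal sketch of
+ the ap_table_get and ap_table_set calls described above (the
+ function name and the X-Example header are invented for
+ illustration):

void note_headers (request_rec *r)
{
    /* read a header the client sent, if it is present */
    const char *agent = ap_table_get (r->headers_in, "User-Agent");

    /* add a header to the response; modules can add to this at will */
    ap_table_set (r->headers_out, "X-Example", agent ? agent : "unknown");
}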
-
-.htaccess files or
-<Directory> sections), for private data it has
-built in the course of servicing the request (so modules' handlers for
-one phase can pass `notes' to their handlers for other phases). There
-is another such configuration vector in the server_rec
-data structure pointed to by the request_rec, which
-contains per (virtual) server configuration data..htaccess
+ files or <Directory> sections), for private
+ data it has built in the course of servicing the request (so
+ modules' handlers for one phase can pass `notes' to their
+ handlers for other phases). There is another such configuration
+ vector in the server_rec data structure pointed to
+ by the request_rec, which contains per (virtual)
+ server configuration data.
-Here is an abridged declaration, giving the fields most commonly used:
+
-
struct request_rec {
pool *pool;
@@ -314,8 +362,8 @@ struct request_rec {
int header_only; /* HEAD request, as opposed to GET */
char *protocol; /* Protocol, as given to us, or HTTP/0.9 */
- char *method; /* GET, HEAD, POST, etc. */
- int method_number; /* M_GET, M_POST, etc. */
+ char *method; /* GET, HEAD, POST, etc. */
+ int method_number; /* M_GET, M_POST, etc. */
/* Info for logging */
@@ -333,109 +381,115 @@ struct request_rec {
* (the thing pointed to being the module's business).
*/
- void *per_dir_config; /* Options set in config files, etc. */
+ void *per_dir_config; /* Options set in config files, etc. */
void *request_config; /* Notes on *this* request */
};
-
+Where request_rec structures come from
+ Where request_rec
+ structures come from
+ Most request_rec structures are built by reading
+ an HTTP request from a client, and filling in the fields.
+ However, there are a few exceptions:
-Most request_rec structures are built by reading an HTTP
-request from a client, and filling in the fields. However, there are
-a few exceptions:
+
+
+ *.var file), or a CGI script
+ which returned a local `Location:', then the resource which
+ the user requested is going to be ultimately located by some
+ URI other than what the client originally supplied. In this
+ case, the server does an internal redirect,
+ constructing a new request_rec for the new URI,
+ and processing it almost exactly as if the client had
+ requested the new URI directly.
-
- (Server-side includes work by building sub-requests and then
- actually invoking the response handler for them, via the
- function *.var file), or a CGI script which returned a
- local `Location:', then the resource which the user requested
- is going to be ultimately located by some URI other than what
- the client originally supplied. In this case, the server does
- an internal redirect, constructing a new
- request_rec for the new URI, and processing it
- almost exactly as if the client had requested the new URI
- directly. ErrorDocument is in scope, the same internal
+ redirect machinery comes into play.ErrorDocument is in scope, the same internal
- redirect machinery comes into play.ap_sub_req_lookup_file,
+ ap_sub_req_lookup_uri, and
+ ap_sub_req_method_uri; these construct a new
+ request_rec structure and processes it as you
+ would expect, up to but not including the point of actually
+ sending a response. (These functions skip over the access
+ checks if the sub-request is for a file in the same
+ directory as the original request).ap_sub_req_lookup_file,
- ap_sub_req_lookup_uri, and
- ap_sub_req_method_uri; these construct a new
- request_rec structure and processes it as you
- would expect, up to but not including the point of actually
- sending a response. (These functions skip over the access
- checks if the sub-request is for a file in the same directory
- as the original request).ap_run_sub_req).ap_run_sub_req).
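+ As a sketch of how those pieces fit together (the URI and the
+ function name are invented; error handling is kept to a
+ minimum):

int include_footer (request_rec *r)
{
    /* build a sub-request for another URI, relative to this request */
    request_rec *subr = ap_sub_req_lookup_uri ("/footer.html", r);

    if (subr->status == HTTP_OK)
        ap_run_sub_req (subr);    /* actually invoke its response handler */

    ap_destroy_sub_req (subr);    /* reclaim the sub-request's resources */
    return OK;
}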
-Handling requests,
+ declining, and returning error codes
+ As discussed above, each handler, when invoked to handle a
+ particular request_rec, has to return an
+ int to indicate what happened. That can either be
-Handling requests, declining, and returning error
- codes
+
+
+ Note that if the error code returned is request_rec, has to return an int to
-indicate what happened. That can either be
+
-
+ REDIRECT,
+ then the module should put a Location in the
+ request's headers_out, to indicate where the
+ client should be redirected to.
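+ A minimal sketch of that pattern, assuming the usual httpd.h
+ context (the URIs and the function name are invented):

int example_redirector (request_rec *r)
{
    if (strncmp (r->uri, "/old/", 5) != 0)
        return DECLINED;          /* not ours; let other modules try */

    /* tell the client where to go, then return the redirect status */
    ap_table_set (r->headers_out, "Location", "/new/");
    return REDIRECT;
}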
-Note that if the error code returned is REDIRECT, then
-the module should put a Location in the request's
-headers_out, to indicate where the client should be
-redirected to. Special
+ considerations for response handlers
+ Handlers for most phases do their work by simply setting a few
+ fields in the request_rec structure (or, in the
+ case of access checkers, simply by returning the correct error
+ code). However, response handlers have to actually send a
+ response back to the client.
-Special considerations for response
- handlers
+ ap_send_http_header. (You don't have
+ to do anything special to skip sending the header for HTTP/0.9
+ requests; the function figures out on its own that it shouldn't
+ do anything). If the request is marked
+ header_only, that's all they should do; they
+ should return after that, without attempting any further
+ output.request_rec structure (or, in the case of access
-checkers, simply by returning the correct error code). However,
-response handlers have to actually send a request back to the client. ap_rputc and ap_rprintf, for
+ internally generated output, and ap_send_fd, to
+ copy the contents of some FILE * straight to the
+ client.ap_send_http_header. (You don't have to do
-anything special to skip sending the header for HTTP/0.9 requests; the
-function figures out on its own that it shouldn't do anything). If
-the request is marked header_only, that's all they should
-do; they should return after that, without attempting any further
-output. ap_rputc
-and ap_rprintf, for internally generated output, and
-ap_send_fd, to copy the contents of some FILE *
-straight to the client. GET requests
-which have no more specific handler; it also shows how conditional
-GETs can be handled, if it's desirable to do so in a
-particular response handler --- ap_set_last_modified checks
-against the If-modified-since value supplied by the
-client, if any, and returns an appropriate code (which will, if
-nonzero, be USE_LOCAL_COPY). No similar considerations apply for
-ap_set_content_length, but it returns an error code for
-symmetry.
+
+ Finally, if all of this is too much of a challenge, there are a
+ few ways out of it. First off, as shown above, a response
+ handler which has not yet produced any output can simply return
+ an error code, in which case the server will automatically
+ produce an error response. Secondly, it can punt to some other
+ handler by invoking GET requests which have no more specific handler;
+ it also shows how conditional GETs can be handled,
+ if it's desirable to do so in a particular response handler ---
+ ap_set_last_modified checks against the
+ If-modified-since value supplied by the client, if
+ any, and returns an appropriate code (which will, if nonzero,
+ be USE_LOCAL_COPY). No similar considerations apply for
+ ap_set_content_length, but it returns an error
+ code for symmetry.
int default_handler (request_rec *r)
{
int errstatus;
@@ -445,7 +499,7 @@ int default_handler (request_rec *r)
if (r->finfo.st_mode == 0) return NOT_FOUND;
if ((errstatus = ap_set_content_length (r, r->finfo.st_size))
- || (errstatus = ap_set_last_modified (r, r->finfo.st_mtime)))
+ || (errstatus = ap_set_last_modified (r, r->finfo.st_mtime)))
return errstatus;
f = fopen (r->filename, "r");
@@ -463,119 +517,125 @@ int default_handler (request_rec *r)
ap_pfclose (r->pool, f);
return OK;
}
-
+ap_internal_redirect, which is
+ how the internal redirection machinery discussed above is
+ invoked. A response handler which has internally redirected
+ should always return OK.
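+ For instance (a sketch only; the target URI and the function
+ name are invented):

int punt_to_front_page (request_rec *r)
{
    /* hand the whole request off to another URI, internally */
    ap_internal_redirect ("/front-page.html", r);
    return OK;    /* a handler which has internally redirected returns OK */
}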
-Finally, if all of this is too much of a challenge, there are a few
-ways out of it. First off, as shown above, a response handler which
-has not yet produced any output can simply return an error code, in
-which case the server will automatically produce an error response.
-Secondly, it can punt to some other handler by invoking
-ap_internal_redirect, which is how the internal redirection
-machinery discussed above is invoked. A response handler which has
-internally redirected should always return OK. ap_internal_redirect from handlers
+ which are not response handlers will lead to serious
+ confusion).ap_internal_redirect from handlers which are
-not response handlers will lead to serious confusion).
+ Special
+ considerations for authentication handlers
+ Stuff that should be discussed here in detail:
-Special considerations for authentication
- handlers
+
+
-ap_auth_type,
+ ap_auth_name, and ap_requires.
-
+ ap_auth_type,
- ap_auth_name, and ap_requires.
- ap_get_basic_auth_pw,
- which sets the connection->user structure field
- automatically, and ap_note_basic_auth_failure, which
- arranges for the proper WWW-Authenticate: header
- to be sent back).
-ap_get_basic_auth_pw, which sets the
+ connection->user structure field
+ automatically, and ap_note_basic_auth_failure,
+ which arranges for the proper WWW-Authenticate:
+ header to be sent back).Special considerations for logging handlers
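+ A rough sketch of how those primitives are typically used in a
+ user-id checker, assuming the usual httpd.h context; the
+ hard-wired password test stands in for a real lookup:

int example_check_user_id (request_rec *r)
{
    const char *sent_pw;
    int res = ap_get_basic_auth_pw (r, &sent_pw);

    if (res != OK)
        return res;                      /* e.g., no credentials were sent */

    /* r->connection->user has been filled in by ap_get_basic_auth_pw */
    if (strcmp (sent_pw, "opensesame") != 0) {
        ap_note_basic_auth_failure (r);  /* arranges WWW-Authenticate: */
        return AUTH_REQUIRED;
    }
    return OK;
}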
+ Special
+ considerations for logging handlers
+ When a request has internally redirected, there is the question
+ of what to log. Apache handles this by bundling the entire
+ chain of redirects into a list of request_rec
+ structures which are threaded through the
+ r->prev and r->next pointers.
+ The request_rec which is passed to the logging
+ handlers in such cases is the one which was originally built
+ for the initial request from the client; note that the
+ bytes_sent field will only be correct in the last request in
+ the chain (the one for which a response was actually sent).
-When a request has internally redirected, there is the question of
-what to log. Apache handles this by bundling the entire chain of
-redirects into a list of request_rec structures which are
-threaded through the r->prev and r->next
-pointers. The request_rec which is passed to the logging
-handlers in such cases is the one which was originally built for the
-initial request from the client; note that the bytes_sent field will
-only be correct in the last request in the chain (the one for which a
-response was actually sent).
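+ A short sketch of a logging handler that respects that rule (the
+ function name is invented; the actual logging call is left as a
+ comment):

int example_logger (request_rec *orig)
{
    request_rec *last = orig;

    /* walk to the end of the internal-redirect chain; only the last
     * request in the chain has a meaningful bytes_sent value */
    while (last->next != NULL)
        last = last->next;

    /* log orig->uri (what the client asked for) together with
     * last->bytes_sent (what was actually sent back) here */

    return OK;
}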
+ Resource allocation and resource
+ pools
-Resource allocation and resource pools
-ap_pfopen, which also
-arranges for the underlying file descriptor to be closed before any
-child processes, such as for CGI scripts, are execed), or
-in case you are using the timeout machinery (which isn't yet even
-documented here). However, there are two benefits to using it:
-resources allocated to a pool never leak (even if you allocate a
-scratch string, and just forget about it); also, for memory
-allocation, ap_palloc is generally faster than
-malloc.
-Allocation of memory in pools
-ap_palloc, which takes two arguments, one being a pointer to
-a resource pool structure, and the other being the amount of memory to
-allocate (in chars). Within handlers for handling
-requests, the most common way of getting a resource pool structure is
-by looking at the pool slot of the relevant
-request_rec; hence the repeated appearance of the
-following idiom in module code:
-
+
ap_pfopen, which also arranges
+ for the underlying file descriptor to be closed before any
+ child processes, such as for CGI scripts, are
+ execed), or in case you are using the timeout
+ machinery (which isn't yet even documented here). However,
+ there are two benefits to using it: resources allocated to a
+ pool never leak (even if you allocate a scratch string, and
+ just forget about it); also, for memory allocation,
+ ap_palloc is generally faster than
+ malloc.Allocation of memory in pools
+
+ ap_palloc, which takes two arguments, one being a
+ pointer to a resource pool structure, and the other being the
+ amount of memory to allocate (in chars). Within
+ handlers for handling requests, the most common way of getting
+ a resource pool structure is by looking at the
+ pool slot of the relevant
+ request_rec; hence the repeated appearance of the
+ following idiom in module code:
int my_handler(request_rec *r)
{
struct my_structure *foo;
@@ -583,355 +643,374 @@ int my_handler(request_rec *r)
foo = (foo *)ap_palloc (r->pool, sizeof(my_structure));
}
-
-ap_pfree ---
-ap_palloced memory is freed only when the associated
-resource pool is cleared. This means that ap_palloc does not
-have to do as much accounting as malloc(); all it does in
-the typical case is to round up the size, bump a pointer, and do a
-range check.
-ap_palloc
-could cause a server process to grow excessively large. There are
-two ways to deal with this, which are dealt with below; briefly, you
-can use malloc, and try to be sure that all of the memory
-gets explicitly freed, or you can allocate a sub-pool of
-the main pool, allocate your memory in the sub-pool, and clear it out
-periodically. The latter technique is discussed in the section on
-sub-pools below, and is used in the directory-indexing code, in order
-to avoid excessive storage allocation when listing directories with
-thousands of files).
-Allocating initialized memory
-ap_pcalloc has the same
-interface as ap_palloc, but clears out the memory it
-allocates before it returns it. The function ap_pstrdup
-takes a resource pool and a char * as arguments, and
-allocates memory for a copy of the string the pointer points to,
-returning a pointer to the copy. Finally ap_pstrcat is a
-varargs-style function, which takes a pointer to a resource pool, and
-at least two char * arguments, the last of which must be
-NULL. It allocates enough memory to fit copies of each
-of the strings, as a unit; for instance:
-
+
+
+ ap_pfree ---
+ ap_palloced memory is freed only when the
+ associated resource pool is cleared. This means that
+ ap_palloc does not have to do as much accounting
+ as malloc(); all it does in the typical case is to
+ round up the size, bump a pointer, and do a range check.ap_palloc could cause a server process to grow
+ excessively large. There are two ways to deal with this, which
+ are dealt with below; briefly, you can use malloc,
+ and try to be sure that all of the memory gets explicitly
+ freed, or you can allocate a sub-pool of the main
+ pool, allocate your memory in the sub-pool, and clear it out
+ periodically. The latter technique is discussed in the section
+ on sub-pools below, and is used in the directory-indexing code,
+ in order to avoid excessive storage allocation when listing
+ directories with thousands of files).Allocating initialized memory
+
+ ap_pcalloc has
+ the same interface as ap_palloc, but clears out
+ the memory it allocates before it returns it. The function
+ ap_pstrdup takes a resource pool and a char
+ * as arguments, and allocates memory for a copy of the
+ string the pointer points to, returning a pointer to the copy.
+ Finally ap_pstrcat is a varargs-style function,
+ which takes a pointer to a resource pool, and at least two
+ char * arguments, the last of which must be
+ NULL. It allocates enough memory to fit copies of
+ each of the strings, as a unit; for instance:
ap_pstrcat (r->pool, "foo", "/", "bar", NULL);
-
-"foo/bar".
-Commonly-used pools in the Apache Web server
-
-
-
-
-
-
-
-
-
-
-
-
-
-
- Tracking open files, etc.
-ap_pfopen, which
-takes a resource pool and two strings as arguments; the strings are
-the same as the typical arguments to fopen, e.g.,
-
+
+
+ "foo/bar".Commonly-used pools in
+ the Apache Web server
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ Tracking open files,
+ etc.
+
+ ap_pfopen, which takes a resource pool and two
+ strings as arguments; the strings are the same as the typical
+ arguments to fopen, e.g.,
...
FILE *f = ap_pfopen (r->pool, r->filename, "r");
if (f == NULL) { ... } else { ... }
-
-ap_popenf routine, which parallels the
-lower-level open system call. Both of these routines
-arrange for the file to be closed when the resource pool in question
-is cleared.
-ap_pfopen, and ap_popenf,
-namely ap_pfclose and ap_pclosef. (This is
-because, on many systems, the number of files which a single process
-can have open is quite limited). It is important to use these
-functions to close files allocated with ap_pfopen and
-ap_popenf, since to do otherwise could cause fatal errors on
-systems such as Linux, which react badly if the same
-FILE* is closed more than once.
-close functions is not mandatory, since the
-file will eventually be closed regardless, but you should consider it
-in cases where your module is opening, or could open, a lot of files).
-Other sorts of resources --- cleanup functions
-
-More text goes here. Describe the the cleanup primitives in terms of
-which the file stuff is implemented; also,
-spawn_process.
-Fine control --- creating and dealing with sub-pools, with a note
-on sub-requests
+
-On rare occasions, too-free use of ap_palloc() and the
-associated primitives may result in undesirably profligate resource
-allocation. You can deal with such a case by creating a
-sub-pool, allocating within the sub-pool rather than the main
-pool, and clearing or destroying the sub-pool, which releases the
-resources which were associated with it. (This really is a
-rare situation; the only case in which it comes up in the standard
-module set is in case of listing directories, and then only with
-very large directories. Unnecessary use of the primitives
-discussed here can hair up your code quite a bit, with very little
-gain). ap_popenf routine, which
+ parallels the lower-level open system call. Both
+ of these routines arrange for the file to be closed when the
+ resource pool in question is cleared.ap_make_sub_pool,
-which takes another pool (the parent pool) as an argument. When the
-main pool is cleared, the sub-pool will be destroyed. The sub-pool
-may also be cleared or destroyed at any time, by calling the functions
-ap_clear_pool and ap_destroy_pool, respectively.
-(The difference is that ap_clear_pool frees resources
-associated with the pool, while ap_destroy_pool also
-deallocates the pool itself. In the former case, you can allocate new
-resources within the pool, and clear it again, and so forth; in the
-latter case, it is simply gone). ap_pfopen, and
+ ap_popenf, namely ap_pfclose and
+ ap_pclosef. (This is because, on many systems, the
+ number of files which a single process can have open is quite
+ limited). It is important to use these functions to close files
+ allocated with ap_pfopen and
+ ap_popenf, since to do otherwise could cause fatal
+ errors on systems such as Linux, which react badly if the same
+ FILE* is closed more than once.ap_sub_req_... functions)
-is ap_destroy_sub_req, which frees the resource pool.
-Before calling this function, be sure to copy anything that you care
-about which might be allocated in the sub-request's resource pool into
-someplace a little less volatile (for instance, the filename in its
-request_rec structure). close functions is not mandatory,
+ since the file will eventually be closed regardless, but you
+ should consider it in cases where your module is opening, or
+ could open, a lot of files).ap_destroy_... functions).
+ Other sorts of resources --- cleanup functions
-Configuration, commands and the like
+
+ More text goes here. Describe the cleanup primitives in
+ terms of which the file stuff is implemented; also,
+
-One of the design goals for this server was to maintain external
-compatibility with the NCSA 1.3 server --- that is, to read the same
-configuration files, to process all the directives therein correctly,
-and in general to be a drop-in replacement for NCSA. On the other
-hand, another design goal was to move as much of the server's
-functionality into modules which have as little as possible to do with
-the monolithic server core. The only way to reconcile these goals is
-to move the handling of most commands from the central server into the
-modules. spawn_process.
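+ Until that text arrives, here is a bare sketch of the cleanup
+ primitive in terms of which the file functions are built; the
+ data and function names are invented, while ap_register_cleanup
+ and ap_null_cleanup are the real primitives:

static void close_my_resource (void *data)
{
    /* release whatever resource was stashed in data */
}

void remember_my_resource (pool *p, void *data)
{
    /* close_my_resource runs when p is cleared or destroyed; the
     * last argument is run in child processes about to exec
     * something, and ap_null_cleanup simply does nothing there. */
    ap_register_cleanup (p, data, close_my_resource, ap_null_cleanup);
}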
+ AddType and
-DefaultType directives, and so forth. In general, the
-governing philosophy is that anything which can be made
-configurable by directory should be; per-server information is
-generally used in the standard set of modules for information like
-Aliases and Redirects which come into play
-before the request is tied to a particular place in the underlying
-file system. Fine control --- creating and dealing with sub-pools, with
+ a note on sub-requests
+ On rare occasions, too-free use of ap_palloc() and
+ the associated primitives may result in undesirably profligate
+ resource allocation. You can deal with such a case by creating
+ a sub-pool, allocating within the sub-pool rather than
+ the main pool, and clearing or destroying the sub-pool, which
+ releases the resources which were associated with it. (This
+ really is a rare situation; the only case in which it
+ comes up in the standard module set is in case of listing
+ directories, and then only with very large
+ directories. Unnecessary use of the primitives discussed here
+ can hair up your code quite a bit, with very little gain).
-Another requirement for emulating the NCSA server is being able to
-handle the per-directory configuration files, generally called
-.htaccess files, though even in the NCSA server they can
-contain directives which have nothing at all to do with access
-control. Accordingly, after URI -> filename translation, but before
-performing any other phase, the server walks down the directory
-hierarchy of the underlying filesystem, following the translated
-pathname, to read any .htaccess files which might be
-present. The information which is read in then has to be
-merged with the applicable information from the server's own
-config files (either from the <Directory> sections
-in access.conf, or from defaults in
-srm.conf, which actually behaves for most purposes almost
-exactly like <Directory />).ap_make_sub_pool, which takes another pool (the
+ parent pool) as an argument. When the main pool is cleared, the
+ sub-pool will be destroyed. The sub-pool may also be cleared or
+ destroyed at any time, by calling the functions
+ ap_clear_pool and ap_destroy_pool,
+ respectively. (The difference is that
+ ap_clear_pool frees resources associated with the
+ pool, while ap_destroy_pool also deallocates the
+ pool itself. In the former case, you can allocate new resources
+ within the pool, and clear it again, and so forth; in the
+ latter case, it is simply gone)..htaccess files, we need to discard the storage allocated
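+ A minimal sketch of that pattern, in the spirit of the
+ directory-listing case mentioned above (everything other than
+ the pool primitives themselves is invented):

void list_many_items (request_rec *r)
{
    pool *scratch = ap_make_sub_pool (r->pool);
    int i;

    for (i = 0; i < 50000; ++i) {
        char *line = ap_pstrcat (scratch, "item ", "name", NULL);
        ap_rputs (line, r);            /* ...or whatever uses the string */

        if (i % 1000 == 999)
            ap_clear_pool (scratch);   /* periodically release scratch space */
    }

    ap_destroy_pool (scratch);         /* now the sub-pool itself is gone */
}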
-for handling them. That is solved the same way it is solved wherever
-else similar problems come up, by tying those structures to the
-per-transaction resource pool. ap_sub_req_... functions) is
+ ap_destroy_sub_req, which frees the resource pool.
+ Before calling this function, be sure to copy anything that you
+ care about which might be allocated in the sub-request's
+ resource pool into someplace a little less volatile (for
+ instance, the filename in its request_rec
+ structure).Per-directory configuration structures
+ ap_destroy_... functions).mod_mime.c,
-which defines the file typing handler which emulates the NCSA server's
-behavior of determining file types from suffixes. What we'll be
-looking at, here, is the code which implements the
-AddType and AddEncoding commands. These
-commands can appear in .htaccess files, so they must be
-handled in the module's private per-directory data, which in fact,
-consists of two separate tables for MIME types and
-encoding information, and is declared as follows:
+ Configuration, commands and
+ the like
+ One of the design goals for this server was to maintain
+ external compatibility with the NCSA 1.3 server --- that is, to
+ read the same configuration files, to process all the
+ directives therein correctly, and in general to be a drop-in
+ replacement for NCSA. On the other hand, another design goal
+ was to move as much of the server's functionality into modules
+ which have as little as possible to do with the monolithic
+ server core. The only way to reconcile these goals is to move
+ the handling of most commands from the central server into the
+ modules.
-
+
+ When the server is reading a configuration file, or
+ AddType and DefaultType
+ directives, and so forth. In general, the governing philosophy
+ is that anything which can be made configurable by
+ directory should be; per-server information is generally used
+ in the standard set of modules for information like
+ Aliases and Redirects which come into
+ play before the request is tied to a particular place in the
+ underlying file system..htaccess files, though even in the NCSA
+ server they can contain directives which have nothing at all to
+ do with access control. Accordingly, after URI -> filename
+ translation, but before performing any other phase, the server
+ walks down the directory hierarchy of the underlying
+ filesystem, following the translated pathname, to read any
+ .htaccess files which might be present. The
+ information which is read in then has to be merged
+ with the applicable information from the server's own config
+ files (either from the <Directory> sections
+ in access.conf, or from defaults in
+ srm.conf, which actually behaves for most purposes
+ almost exactly like <Directory />)..htaccess files, we need to discard the
+ storage allocated for handling them. That is solved the same
+ way it is solved wherever else similar problems come up, by
+ tying those structures to the per-transaction resource
+ pool.Per-directory configuration
+ structures
+ Let's look at how all of this plays out in
+ mod_mime.c, which defines the file typing handler
+ which emulates the NCSA server's behavior of determining file
+ types from suffixes. What we'll be looking at, here, is the
+ code which implements the AddType and
+ AddEncoding commands. These commands can appear in
+ .htaccess files, so they must be handled in the
+ module's private per-directory data, which in fact, consists of
+ two separate tables for MIME types and encoding
+ information, and is declared as follows:
+
typedef struct {
table *forced_types; /* Additional AddTyped stuff */
table *encoding_types; /* Added with AddEncoding... */
} mime_dir_config;
-
+<Directory> section, which includes one of
+ the MIME module's commands, it needs to create a
+ mime_dir_config structure, so those commands have
+ something to act on. It does this by invoking the function it
+ finds in the module's `create per-dir config slot', with two
+ arguments: the name of the directory to which this
+ configuration information applies (or NULL for
+ srm.conf), and a pointer to a resource pool in
+ which the allocation should happen.
+ (If the configuration information is being read from a
+ .htaccess file, that
+ resource pool is the per-request resource pool for the request;
+ otherwise it is a resource pool which is used for configuration
+ data, and cleared on restarts. Either way, it is important for
+ the structure being created to vanish when the pool is cleared,
+ by registering a cleanup on the pool if necessary).
+
+ For the MIME module, that function just
+ ap_pallocs the structure above, and creates
+ a couple of tables to fill it. That looks like
+ this:
void *create_mime_dir_config (pool *p, char *dummy)
{
mime_dir_config *new =
@@ -942,24 +1021,24 @@ void *create_mime_dir_config (pool *p, char *dummy)
return new;
}
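+ The body of the creation function is elided by the hunk marker
+ above. A plausible minimal sketch, assuming the
+ ap_palloc and ap_make_table primitives
+ described elsewhere in this document, would be:

void *create_mime_dir_config (pool *p, char *dummy)
{
    /* Allocate the per-directory structure from the pool the
     * server hands us, so it vanishes when that pool is cleared.
     */
    mime_dir_config *new =
        (mime_dir_config *) ap_palloc (p, sizeof (mime_dir_config));

    /* Small initial sizes; the tables grow as AddType and
     * AddEncoding commands are processed.
     */
    new->forced_types   = ap_make_table (p, 4);
    new->encoding_types = ap_make_table (p, 4);

    return new;
}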
-
+ Now, suppose we've just read in a
+ .htaccess file.
+ We already have the per-directory configuration structure for
+ the next directory up in the hierarchy. If the
+ .htaccess file we just read in didn't have any
+ AddType or AddEncoding commands, its
+ per-directory config structure for the MIME module is still
+ valid, and we can just use it. Otherwise, we need to merge the
+ two structures somehow.
+
+ To merge them, the server invokes the module's per-directory
+ config merge function, if one is present, passing it the two
+ structures and a resource pool in which to allocate the result.
+ For the MIME module, all that needs to be done is to overlay
+ the subdirectory's tables on top of the parent's:
void *merge_mime_dir_configs (pool *p, void *parent_dirv, void *subdirv)
{
mime_dir_config *parent_dir = (mime_dir_config *)parent_dirv;
@@ -974,118 +1053,121 @@ void *merge_mime_dir_configs (pool *p, void *parent_dirv, void *subdirv)
return new;
}
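+ The middle of the merge function is likewise elided. A hedged
+ sketch, assuming ap_overlay_tables with the
+ subdirectory's entries taking precedence over the parent's,
+ might read:

void *merge_mime_dir_configs (pool *p, void *parent_dirv, void *subdirv)
{
    mime_dir_config *parent_dir = (mime_dir_config *) parent_dirv;
    mime_dir_config *subdir     = (mime_dir_config *) subdirv;
    mime_dir_config *new =
        (mime_dir_config *) ap_palloc (p, sizeof (mime_dir_config));

    /* Entries from the subdirectory override those inherited
     * from the parent directory.
     */
    new->forced_types   = ap_overlay_tables (p, subdir->forced_types,
                                             parent_dir->forced_types);
    new->encoding_types = ap_overlay_tables (p, subdir->encoding_types,
                                             parent_dir->encoding_types);

    return new;
}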
-
+ As a note --- if there is no per-directory merge function
+ present, the server will just use the subdirectory's
+ configuration info, and ignore the parent's. For some modules,
+ that works just fine (e.g., for the includes module,
+ whose per-directory configuration information consists solely
+ of the state of the XBITHACK), and for those
+ modules, you can just not declare one, and leave the
+ corresponding structure slot in the module itself
+ NULL.
+
+ Command handling
+
+ Now that we have these structures, we need to be able to figure
+ out how to fill them. That involves processing the actual
+ AddType and AddEncoding commands. To
+ find commands, the server looks in the module's command
+ table. That table contains information on how many
+ arguments the commands take, and in what formats, where it is
+ permitted, and so forth. That information is sufficient to
+ allow the server to invoke most command-handling functions with
+ pre-parsed arguments. Without further ado, let's look at the
+ AddType command handler, which looks like this
+ (the AddEncoding command looks basically the same,
+ and won't be shown here):
+
char *add_type(cmd_parms *cmd, mime_dir_config *m, char *ct, char *ext)
{
if (*ext == '.') ++ext;
ap_table_set (m->forced_types, ext, ct);
return NULL;
}
+
+ This command handler is unusually simple. As you can see, it
+ takes four arguments, two of which are pre-parsed arguments,
+ the third being the per-directory configuration structure for
+ the module in question, and the fourth being a pointer to a
+ cmd_parms structure. That structure contains a
+ bunch of arguments which are frequently of use to some, but not
+ all, commands, including a resource pool (from which memory can
+ be allocated, and to which cleanups should be tied), and the
+ (virtual) server being configured, from which the module's
+ per-server configuration data can be obtained if required.
+
+ If a command handler runs into a problem processing the
+ command, it can signal this by returning an error message
+ string rather than NULL; this causes an error to be
+ printed out on the server's stderr, followed by a
+ quick exit, if it is in the main config files; for a
+ .htaccess file, the syntax error is logged in the
+ server error log (along with an indication of where it came
+ from), and the request is bounced with a server error response
+ (HTTP error status, code 500).
+
+ The entries in these tables are:
+
command_rec mime_cmds[] = {
{ "AddType", add_type, NULL, OR_FILEINFO, TAKE2,
"a mime type followed by a file extension" },
{ "AddEncoding", add_encoding, NULL, OR_FILEINFO, TAKE2,
- "an encoding (e.g., gzip), followed by a file extension" },
+ "an encoding (e.g., gzip), followed by a file extension" },
{ NULL }
};
-
+
+
-
+ The first two fields of each entry are the name of the command,
+ as it appears in the config file, and the function which
+ handles it. The remaining fields are:
+
+ A (void *) pointer, which is passed in the
+ cmd_parms structure to the command handler ---
+ this is useful in case many similar commands are handled by
+ the same function.
+
+ A bit mask indicating where the command may appear. There are
+ mask bits corresponding to each
+ AllowOverride option, and an additional mask
+ bit, RSRC_CONF, indicating that the command may
+ appear in the server's own config files, but not in
+ any .htaccess file.
+
+ A flag indicating how many arguments the command handler wants
+ pre-parsed, and how they should be passed in.
+ TAKE2 indicates two pre-parsed arguments. Other
+ options are TAKE1, which indicates one
+ pre-parsed argument, FLAG, which indicates that
+ the argument should be On or Off,
+ and is passed in as a boolean flag, RAW_ARGS,
+ which causes the server to give the command the raw, unparsed
+ arguments (everything but the command name itself). There is
+ also ITERATE, which means that the handler looks
+ the same as TAKE1, but that if multiple
+ arguments are present, it should be called multiple times,
+ and finally ITERATE2, which indicates that the
+ command handler looks like a TAKE2, but if more
+ arguments are present, then it should be called multiple
+ times, holding the first argument constant.
+
+ Finally, a usage message describing the arguments the command
+ expects, which is used in diagnostic error messages (this entry
+ may be NULL).
+
+ Having set this all up, we have to use it. This is ultimately
+ done in the module's handlers, specifically in its file-typing
+ handler, which looks more or less like this; note that the
+ per-directory configuration structure is extracted from the
+ request_rec's per-directory configuration
+ vector by using the ap_get_module_config function.
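+ To make the argument-format flags concrete, here is a hedged
+ sketch of what handlers for a hypothetical FLAG
+ command and a hypothetical ITERATE command could
+ look like; the structure and field names are invented for
+ illustration and are not part of mod_mime:

typedef struct {
    int fancy_indexing;            /* hypothetical per-directory data */
    table *index_names;
} example_dir_config;

/* FLAG: the argument arrives pre-parsed as a boolean. */
char *set_fancy_indexing (cmd_parms *cmd, void *dirv, int flag)
{
    example_dir_config *d = (example_dir_config *) dirv;
    d->fancy_indexing = flag;      /* 1 for On, 0 for Off */
    return NULL;
}

/* ITERATE: declared like a TAKE1 handler, but called once per argument. */
char *add_index_name (cmd_parms *cmd, void *dirv, char *name)
{
    example_dir_config *d = (example_dir_config *) dirv;
    ap_table_set (d->index_names, name, "1");
    return NULL;
}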
+
+
int find_ct(request_rec *r)
{
int i;
@@ -1121,29 +1203,29 @@ int find_ct(request_rec *r)
return OK;
}
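+ The interesting part of find_ct is elided by the
+ hunk marker above. The following sketch shows only the general
+ shape --- retrieving the per-directory data and consulting the
+ two tables --- and simplifies the real suffix handling; the
+ &mime_module symbol is assumed to be the module
+ record declared elsewhere in mod_mime.c:

int find_ct (request_rec *r)
{
    mime_dir_config *conf = (mime_dir_config *)
        ap_get_module_config (r->per_dir_config, &mime_module);
    const char *fn = strrchr (r->filename, '.');
    const char *type, *enc;

    if (fn == NULL)
        return DECLINED;
    ++fn;                          /* skip past the '.' itself */

    /* Consult the tables filled in by AddType and AddEncoding. */
    if ((type = ap_table_get (conf->forced_types, fn)) != NULL)
        r->content_type = type;
    if ((enc = ap_table_get (conf->encoding_types, fn)) != NULL)
        r->content_encoding = enc;

    return OK;
}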
-
+
+ Side notes --- per-server
+ configuration, virtual servers, etc.
+ The basic ideas behind per-server module configuration are
+ basically the same as those for per-directory configuration;
+ there is a creation function and a merge function, the latter
+ being invoked where a virtual server has partially overridden
+ the base server configuration, and a combined structure must be
+ computed. (As with per-directory configuration, the default if
+ no merge function is specified, and a module is configured in
+ some virtual server, is that the base configuration is simply
+ ignored).
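+ As with the per-directory case, these are just functions which
+ allocate and combine a private structure. A hedged sketch for a
+ hypothetical module keeping one table of per-server data (the
+ structure and field names are invented) could be:

typedef struct {
    table *redirects;              /* hypothetical per-server data */
} example_server_config;

void *create_example_server_config (pool *p, server_rec *s)
{
    example_server_config *new = (example_server_config *)
        ap_palloc (p, sizeof (example_server_config));

    new->redirects = ap_make_table (p, 4);
    return new;
}

void *merge_example_server_configs (pool *p, void *basev, void *virtv)
{
    example_server_config *base = (example_server_config *) basev;
    example_server_config *virt = (example_server_config *) virtv;
    example_server_config *new  = (example_server_config *)
        ap_palloc (p, sizeof (example_server_config));

    /* Where both servers define the same key, the virtual host's
     * entry takes precedence over the base server's.
     */
    new->redirects = ap_overlay_tables (p, virt->redirects,
                                        base->redirects);
    return new;
}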
-
+
+
+
+
+
+ Commands which need access to the per-server configuration
+ data use the cmd_parms data to get at it. Here's an
+ example, from the alias module, which also indicates how a
+ syntax error can be returned (note that the per-directory
+ configuration argument to the command handler is declared as a
+ dummy, since the module doesn't actually have per-directory
+ config data):
char *add_redirect(cmd_parms *cmd, void *dummy, char *f, char *url)
{
server_rec *s = cmd->server;
@@ -1156,6 +1238,8 @@ char *add_redirect(cmd_parms *cmd, void *dummy, char *f, char *url)
new->fake = f; new->real = url;
return NULL;
}
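+ The middle of add_redirect is elided by the hunk
+ marker above. The following is a simplified sketch in the
+ spirit of the 1.3 mod_alias code, not the exact original; the
+ alias_server_conf and alias_entry
+ names are quoted from memory and should be checked against the
+ real source:

char *add_redirect (cmd_parms *cmd, void *dummy, char *f, char *url)
{
    server_rec *s = cmd->server;
    alias_server_conf *conf = (alias_server_conf *)
        ap_get_module_config (s->module_config, &alias_module);
    alias_entry *new;

    /* A syntax error is reported by returning a string rather
     * than NULL.
     */
    if (!ap_is_url (url))
        return "Redirect to non-URL";

    /* The entry lives in the per-server array, which was
     * allocated from the configuration pool.
     */
    new = (alias_entry *) ap_push_array (conf->redirects);
    new->fake = f;
    new->real = url;
    return NULL;
}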
diff --git a/docs/manual/developer/debugging.html b/docs/manual/developer/debugging.html
index 12d2d11eeb..79594dfbe5 100644
--- a/docs/manual/developer/debugging.html
+++ b/docs/manual/developer/debugging.html
@@ -1,137 +1,221 @@
+Debugging Memory Allocation in APR
+
+
+ Available debugging options:
+
+   Allocation Debugging - ALLOC_DEBUG
+   Malloc Support - ALLOC_USE_MALLOC
+   Pool Debugging - POOL_DEBUG
+   Table Debugging - MAKE_TABLE_PROFILE
+     (sample warning output:
+      table_push: apr_table_t created by 0x804d874 hit limit of 10)
+   Allocation Statistics - ALLOC_STATS
+
+ Allowable Combinations
+
+   Option 1              ALLOC  ALLOC_USE  POOL   MAKE_TABLE  ALLOC
+                         DEBUG  MALLOC     DEBUG  PROFILE     STATS
+   ALLOC_DEBUG             -      No        Yes      Yes       Yes
+   ALLOC_USE_MALLOC        No     -         No       No        No
+   POOL_DEBUG              Yes    No        -        Yes       Yes
+   MAKE_TABLE_PROFILE      Yes    No        Yes      -         Yes
+   ALLOC_STATS             Yes    No        Yes      Yes       -
+
+ Activating Debugging Options
+
/*
#define ALLOC_DEBUG
@@ -162,17 +246,12 @@ typedef struct ap_pool_t {
}ap_pool_t;
+Documenting Apache 2.0
+
+ To start a documentation block, use /**;
+ to end a documentation block, use */.
+ In the middle of the block, the following tags are available:
+
+ Description of this function's purpose
+ @param parameter_name description
+ @return description
+ @deffunc signature of the function
+
+ The deffunc is not always necessary. DoxyGen does not have a full parser
+ in it, so any prototype that uses a macro in the return type declaration
+ is too complex for scandoc. Those functions require a deffunc.
+ An example (using &gt; rather than >):
+
+/**
+ * return the final element of the pathname
@@ -48,17 +52,22 @@
+ * @deffunc const char * ap_filename_of_pathname(const char *pathname)
+ */
+
+At the top of the header file, always include:
+
+/**
+ * @package Name of library header
+ */
+
+ScanDoc uses a new html file for each package. The html files are named
+ {Name_of_library_header}.html, so try to be concise with your names.
diff --git a/docs/manual/developer/footer.html b/docs/manual/developer/footer.html
index 1e5f739ebe..edcc022ccc 100644
--- a/docs/manual/developer/footer.html
+++ b/docs/manual/developer/footer.html
@@ -1,8 +1,19 @@
+
+
+
-
-
diff --git a/docs/manual/developer/header.html b/docs/manual/developer/header.html
index 9533b02bda..6c4764044e 100644
--- a/docs/manual/developer/header.html
+++ b/docs/manual/developer/header.html
@@ -1,6 +1,19 @@
-
-
+
+In general, a hook function is one that Apache will call at
+ some point during the processing of a request. Modules can
+ provide functions that are called, and specify when they get
+ called in comparison to other modules.
-In order to create a new hook, four things need to be done:
+In order to create a new hook, four things need to be + done:
-Use the AP_DECLARE_HOOK macro, which needs to be given the return -type of the hook function, the name of the hook, and the arguments. For -example, if the hook returns an int and takes a -request_rec * and an int and is called -"do_something", then declare it like this:
+Use the AP_DECLARE_HOOK macro, which needs to be given the + return type of the hook function, the name of the hook, and the + arguments. For example, if the hook returns an int and + takes a request_rec * and an int and is + called "do_something", then declare it like this:
+ AP_DECLARE_HOOK(int,do_something,(request_rec *r,int + n)) -This should go in a header which modules will include if they want -to use the hook.
+This should go in a header which modules will include if + they want to use the hook.
-Each source file that exports a hook has a private structure which -is used to record the module functions that use the hook. This is -declared as follows:
- -+-Each source file that exports a hook has a private structure + which is used to record the module functions that use the hook. + This is declared as follows:
+APR_HOOK_STRUCT( APR_HOOK_LINK(do_something) ... ) -+
The source file that exports the hook has to implement a function -that will call the hook. There are currently three possible ways to do -this. In all cases, the calling function is called -ap_run_hookname().
+The source file that exports the hook has to implement a + function that will call the hook. There are currently three + possible ways to do this. In all cases, the calling function is + called ap_run_hookname().
-If the return value of a hook is void, then all the hooks are -called, and the caller is implemented like this:
+If the return value of a hook is void, then all the + hooks are called, and the caller is implemented like this:
+ AP_IMPLEMENT_HOOK_VOID(do_something,(request_rec *r,int + n),(r,n)) -AP_IMPLEMENT_HOOK_VOID(do_something,(request_rec *r,int -n),(r,n)) - -The second and third arguments are the dummy argument declaration and -the dummy arguments as they will be used when calling the hook. In -other words, this macro expands to something like this:
- -+-The second and third arguments are the dummy argument + declaration and the dummy arguments as they will be used when + calling the hook. In other words, this macro expands to + something like this:
+void ap_run_do_something(request_rec *r,int n) { ... do_something(r,n); } -+
If the hook returns a value, then it can either be run until the first -hook that does something interesting, like so:
+If the hook returns a value, then it can either be run until + the first hook that does something interesting, like so:
+ AP_IMPLEMENT_HOOK_RUN_FIRST(int,do_something,(request_rec
+ *r,int n),(r,n),DECLINED)

+The first hook that doesn't return DECLINED
+ stops the loop and its return value is returned from the hook
+ caller. Note that DECLINED is the traditional Apache
+ hook return meaning "I didn't do anything", but it can be
+ whatever suits you.
+Alternatively, all hooks can be run until an error occurs. + This boils down to permitting two return values, one of + which means "I did something, and it was OK" and the other + meaning "I did nothing". The first function that returns a + value other than one of those two stops the loop, and its + return is the return value. Declare these like so:
+ AP_IMPLEMENT_HOOK_RUN_ALL(int,do_something,(request_rec + *r,int n),(r,n),OK,DECLINED) -Alternatively, all hooks can be run until an error occurs. This -boils down to permitting two return values, one of which means -"I did something, and it was OK" and the other meaning "I did -nothing". The first function that returns a value other than one of -those two stops the loop, and its return is the return value. Declare -these like so:
+Again, OK and DECLINED are the traditional + values. You can use what you want.
-AP_IMPLEMENT_HOOK_RUN_ALL(int,do_something,(request_rec *r,int -n),(r,n),OK,DECLINED) +Again, OK and DECLINED are the traditional -values. You can use what you want.
- -At appropriate moments in the code, call the hook caller, like -so:
- -+-At appropriate moments in the code, call the hook caller, + like so:
+int n,ret; request_rec *r; ret=ap_run_do_something(r,n); -+
A module that wants a hook to be called needs to do two -things.
+A module that wants a hook to be called needs to do two + things.
-Include the appropriate header, and define a static function of the -correct type:
- -+-Include the appropriate header, and define a static function + of the correct type:
+static int my_something_doer(request_rec *r,int n) { ... return OK; } -+
During initialisation, Apache will call each modules hook -registering function, which is included in the module structure:
- -+-During initialisation, Apache will call each modules hook + registering function, which is included in the module + structure:
+static void my_register_hooks() { ap_hook_do_something(my_something_doer,NULL,NULL,HOOK_MIDDLE); @@ -150,58 +145,55 @@ mode MODULE_VAR_EXPORT my_module = ... my_register_hooks /* register hooks */ }; -+
+In the example above, we didn't use the three arguments in
+ the hook registration function that control calling order.
+ There are two mechanisms for doing this. The first, rather
+ crude, method allows us to specify roughly where the hook is
+ run relative to other modules. The final argument controls
+ this. There are three possible values:
+HOOK_FIRST HOOK_MIDDLE HOOK_LAST -+
All modules using any particular value may be run in any order -relative to each other, but, of course, all modules using -HOOK_FIRST will be run before HOOK_MIDDLE which are -before HOOK_LAST. Modules that don't care when they are run -should use HOOK_MIDDLE. (I spaced these out so people -could do stuff like HOOK_FIRST-2 to get in slightly earlier, -but is this wise? - Ben)
+All modules using any particular value may be run in any + order relative to each other, but, of course, all modules using + HOOK_FIRST will be run before HOOK_MIDDLE + which are before HOOK_LAST. Modules that don't care + when they are run should use HOOK_MIDDLE. (I spaced + these out so people could do stuff like HOOK_FIRST-2 + to get in slightly earlier, but is this wise? - Ben)
-Note that there are two more values, HOOK_REALLY_FIRST and -HOOK_REALLY_LAST. These should only be used by the hook -exporter.
+Note that there are two more values, + HOOK_REALLY_FIRST and HOOK_REALLY_LAST. These + should only be used by the hook exporter.
-The other method allows finer control. When a module knows that it -must be run before (or after) some other modules, it can specify them -by name. The second (third) argument is a NULL-terminated array of -strings consisting of the names of modules that must be run before -(after) the current module. For example, suppose we want "mod_xyz.c" -and "mod_abc.c" to run before we do, then we'd hook as follows:
- -+-The other method allows finer control. When a module knows + that it must be run before (or after) some other modules, it + can specify them by name. The second (third) argument is a + NULL-terminated array of strings consisting of the names of + modules that must be run before (after) the current module. For + example, suppose we want "mod_xyz.c" and "mod_abc.c" to run + before we do, then we'd hook as follows:
+static void register_hooks() { static const char * const aszPre[]={ "mod_xyz.c", "mod_abc.c", NULL }; ap_hook_do_something(my_something_doer,aszPre,NULL,HOOK_MIDDLE); } -+
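Pulling the fragments above together, a minimal sketch of
exporting and consuming a hook with these (early 2.0, draft-era)
macro spellings, reusing the do_something example from the text,
might read:

/* --- in the exporting header --- */
AP_DECLARE_HOOK(int, do_something, (request_rec *r, int n))

/* --- in the exporting source file --- */
APR_HOOK_STRUCT(
    APR_HOOK_LINK(do_something)
)
AP_IMPLEMENT_HOOK_RUN_ALL(int, do_something,
                          (request_rec *r, int n), (r, n),
                          OK, DECLINED)

/* ... and at the appropriate moment in that file ... */
static int call_it(request_rec *r, int n)
{
    return ap_run_do_something(r, n);
}

/* --- in a module that wants to be called --- */
static int my_something_doer(request_rec *r, int n)
{
    return OK;                 /* "I did something, and it was OK" */
}

static void my_register_hooks(void)
{
    ap_hook_do_something(my_something_doer, NULL, NULL, HOOK_MIDDLE);
}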
Note that the sort used to achieve this is stable, so ordering set -by HOOK_ORDER is preserved, as far as is -possible.
- -Ben Laurie, 15th August 1999 - - - - +Note that the sort used to achieve this is stable, so + ordering set by HOOK_ORDER is preserved, as far + as is possible.
+ Ben Laurie, 15th August 1999 + + + diff --git a/docs/manual/developer/index.html b/docs/manual/developer/index.html index 8da1052cdd..c825f96948 100644 --- a/docs/manual/developer/index.html +++ b/docs/manual/developer/index.html @@ -1,40 +1,54 @@ - - - -Many of the documents on these Developer pages are lifted from Apache 1.3's - documentation. While they are all being updated to Apache 2.0, they are - in different stages of progress. Please be patient, and point out any - discrepancies or errors on the developer/ pages directly to the - dev@httpd.apache.org mailing list.
+ + -Many of the documents on these Developer pages are lifted + from Apache 1.3's documentation. While they are all being + updated to Apache 2.0, they are in different stages of + progress. Please be patient, and point out any discrepancies or + errors on the developer/ pages directly to the + dev@httpd.apache.org mailing list.
+ +Layered I/O has been the holy grail of Apache module writers for years. -With Apache 2.0, module writers can finally take advantage of layered I/O -in their modules. +
+In all previous versions of Apache, only one handler was allowed to modify -the data stream that was sent to the client. With Apache 2.0, one module -can modify the data and then specify that other modules can modify the data -if they would like. +
Layered I/O has been the holy grail of Apache module writers + for years. With Apache 2.0, module writers can finally take + advantage of layered I/O in their modules.
-In all previous versions of Apache, only one handler was + allowed to modify the data stream that was sent to the client. + With Apache 2.0, one module can modify the data and then + specify that other modules can modify the data if they would + like.
-In order to make a module use layered I/O, there are some modifications -needed. A new return value has been added for modules, RERUN_HANDLERS. -When a handler returns this value, the core searches through the list of -handlers looking for another module that wants to try the request. +
When a module returns RERUN_HANDLERS, it must modify two fields of the -request_rec, the handler and content_type fields. Most modules will -set the handler field to NULL, and allow the core to choose the which -module gets run next. If these two fields are not modified, then the server -will loop forever calling the same module's handler. +
In order to make a module use layered I/O, there are some + modifications needed. A new return value has been added for + modules, RERUN_HANDLERS. When a handler returns this value, the + core searches through the list of handlers looking for another + module that wants to try the request.
-Most modules should not write out to the network if they want to take -advantage of layered I/O. Two BUFF structures have been added to the -request_rec, one for input and one for output. The module should read and -write to these BUFFs. The module will also have to setup the input field for -the next module in the list. A new function has been added, ap_setup_input, -which all modules should call before they do any reading to get data to modify. -This function checks to determine if the previous module set the input field, -if so, that input is used, if not the file is opened and that data source -is used. The output field is used basically the same way. The module must -set this field before they call ap_r* in order to take advantage of -layered I/O. If this field is not set, ap_r* will write directly to the -client. Usually at the end of a handler, the input (for the next module) -will be the read side of a pipe, and the output will be the write side of -the same pipe. +
When a module returns RERUN_HANDLERS, it must modify two + fields of the request_rec, the handler and content_type fields. + Most modules will set the handler field to NULL, and allow the + core to choose the which module gets run next. If these two + fields are not modified, then the server will loop forever + calling the same module's handler.
-Most modules should not write out to the network if they + want to take advantage of layered I/O. Two BUFF structures have + been added to the request_rec, one for input and one for + output. The module should read and write to these BUFFs. The + module will also have to setup the input field for the next + module in the list. A new function has been added, + ap_setup_input, which all modules should call before they do + any reading to get data to modify. This function checks to + determine if the previous module set the input field, if so, + that input is used, if not the file is opened and that data + source is used. The output field is used basically the same + way. The module must set this field before they call ap_r* in + order to take advantage of layered I/O. If this field is not + set, ap_r* will write directly to the client. Usually at the + end of a handler, the input (for the next module) will be the + read side of a pipe, and the output will be the write side of + the same pipe.
-This example is the most basic layered I/O example possible. It is -basically CGIs generated by mod_cgi and sent to the network via http_core. +
mod_cgi executes the cgi script, and then sets request_rec->input to -the output pipe of the CGI. It then NULLs out request_rec->handler, and -sets request_rec->content_type to whatever the CGI writes out (in this case, -text/html). Finally, mod_cgi returns RERUN_HANDLERS. +
This example is the most basic layered I/O example possible. + It is basically CGIs generated by mod_cgi and sent to the + network via http_core.
-ap_invoke_handlers() then loops back to the top of the handler list -and searches for a handler that can deal with this content_type. In this case -the correct module is the default_handler from http_core. +
mod_cgi executes the cgi script, and then sets + request_rec->input to the output pipe of the CGI. It then + NULLs out request_rec->handler, and sets + request_rec->content_type to whatever the CGI writes out (in + this case, text/html). Finally, mod_cgi returns + RERUN_HANDLERS.
-When default handler starts, it calls ap_setup_input, which has found -a valid request_rec->input, so that is used for all inputs. The output field -in the request_rec is NULL, so when default_handler calls an output primitive -it gets sent out over the network.
+ap_invoke_handlers() then loops back to the top of the + handler list and searches for a handler that can deal with this + content_type. In this case the correct module is the + default_handler from http_core.
-Ryan Bloom, 25th March 2000 - +When default handler starts, it calls ap_setup_input, which + has found a valid request_rec->input, so that is used for + all inputs. The output field in the request_rec is NULL, so + when default_handler calls an output primitive it gets sent out + over the network.
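To make the mod_cgi walk-through concrete, here is a purely
illustrative sketch of a handler following the draft scheme
described above. RERUN_HANDLERS and the request_rec input field
are taken from this text; this is a design sketch of an interface
that was still in flux, not a released API, and run_cgi_script()
is a hypothetical helper:

static int cgi_handler(request_rec *r)
{
    /* Run the CGI and obtain a BUFF connected to its output. */
    BUFF *script_out = run_cgi_script(r);   /* hypothetical helper */

    /* Hand the data stream to whichever module wants it next. */
    r->input        = script_out;   /* the next module reads from here   */
    r->handler      = NULL;         /* let the core pick the next module */
    r->content_type = "text/html";  /* whatever the CGI wrote out        */

    return RERUN_HANDLERS;
}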
+ Ryan Bloom, 25th March 2000 + + diff --git a/docs/manual/developer/modules.html b/docs/manual/developer/modules.html index d2d898f88f..ebf7c33da2 100644 --- a/docs/manual/developer/modules.html +++ b/docs/manual/developer/modules.html @@ -1,65 +1,84 @@ - - - --This is a first attempt at writing the lessons I learned when trying to convert the mod_mmap_static module to Apache 2.0. It's by no means definitive and probably won't even be correct in some ways, but it's a start. -
--These now need to be of type apr_status_t and return a value of that type. Normally the return value will be APR_SUCCESS unless there is some need to signal an error in the cleanup. Be aware that even though you signal an error not all code yet checks and acts upon the error. -
+This is a first attempt at writing the lessons I learned + when trying to convert the mod_mmap_static module to Apache + 2.0. It's by no means definitive and probably won't even be + correct in some ways, but it's a start.
+-These should now be renamed to better signify where they sit in the overall process. So the name gets a small change from mmap_init to mmap_post_config. The arguments passed have undergone a radical change and now look like -
--A lot of the data types have been moved into the APR. This means that some have had a name change, such as the one shown above. The following is a brief list of some of the changes that you are likely to have to make. -
These now need to be of type apr_status_t and return a value + of that type. Normally the return value will be APR_SUCCESS + unless there is some need to signal an error in the cleanup. Be + aware that even though you signal an error not all code yet + checks and acts upon the error.
--The new architecture uses a series of hooks to provide for calling your functions. These you'll need to add to your module by way of a new function, static void register_hooks(void). The function is really reasonably straightforward once you understand what needs to be done. Each function that needs calling at some stage in the processing of a request needs to be registered, handlers do not. There are a number of phases where functions can be added, and for each you can specify with a high degree of control the relative order that the function will be called in. -
--This is the code that was added to mod_mmap_static: -
+These should now be renamed to better signify where they sit + in the overall process. So the name gets a small change from + mmap_init to mmap_post_config. The arguments passed have + undergone a radical change and now look like
+ +A lot of the data types have been moved into the APR. This + means that some have had a name change, such as the one shown + above. The following is a brief list of some of the changes + that you are likely to have to make.
+ +The new architecture uses a series of hooks to provide for + calling your functions. These you'll need to add to your module + by way of a new function, static void register_hooks(void). The + function is really reasonably straightforward once you + understand what needs to be done. Each function that needs + calling at some stage in the processing of a request needs to + be registered, handlers do not. There are a number of phases + where functions can be added, and for each you can specify with + a high degree of control the relative order that the function + will be called in.
+ +This is the code that was added to mod_mmap_static:
static void register_hooks(void)
{
@@ -68,31 +87,43 @@ static void register_hooks(void)
ap_hook_translate_name(mmap_static_xlat,aszPre,NULL,HOOK_LAST);
};
--This registers 2 functions that need to be called, one in the post_config stage (virtually every module will need this one) and one for the translate_name phase. note that while there are different function names the format of each is identical. So what is the format? -
--ap_hook_[phase_name](function_name, predecessors, successors, position); -
--There are 3 hook positions defined... -
--To define the position you use the position and then modify it with the predecessors and successors. each of the modifiers can be a list of functions that should be called, either before the function is run (predecessors) or after the function has run (successors). -
--In the mod_mmap_static case I didn't care about the post_config stage, but the mmap_static_xlat MUST be called after the core module had done it's name translation, hence the use of the aszPre to define a modifier to the position HOOK_LAST. -
--There are now a lot fewer stages to worry about when creating your module definition. The old defintion looked like -
+This registers 2 functions that need to be called, one in + the post_config stage (virtually every module will need this + one) and one for the translate_name phase. note that while + there are different function names the format of each is + identical. So what is the format?
+ +ap_hook_[phase_name](function_name, predecessors, + successors, position);
+ +There are 3 hook positions defined...
+ +To define the position you use the position and then modify + it with the predecessors and successors. each of the modifiers + can be a list of functions that should be called, either before + the function is run (predecessors) or after the function has + run (successors).
+ +In the mod_mmap_static case I didn't care about the + post_config stage, but the mmap_static_xlat MUST be called + after the core module had done it's name translation, hence the + use of the aszPre to define a modifier to the position + HOOK_LAST.
+ +There are now a lot fewer stages to worry about when + creating your module definition. The old defintion looked + like
module MODULE_VAR_EXPORT [module_name]_module =
{
@@ -117,9 +148,8 @@ module MODULE_VAR_EXPORT [module_name]_module =
/* post read-request */
};
--The new structure is a great deal simpler... -
+ +The new structure is a great deal simpler...
module MODULE_VAR_EXPORT [module_name]_module =
{
@@ -133,107 +163,96 @@ module MODULE_VAR_EXPORT [module_name]_module =
/* register hooks */
};
--Some of these read directly across, some don't. I'll try to summarise what should be done below. -
--The stages that read directly across : -
--The remainder of the old functions should be registered as hooks. There are the following hook stages defined so far... -
-Some of these read directly across, some don't. I'll try to + summarise what should be done below.
+The stages that read directly across :
- +The remainder of the old functions should be registered as + hooks. There are the following hook stages defined so + far...
+ +-This is a first attempt at writing the lessons I learned when trying to convert the mod_mmap_static module to Apache 2.0. It's by no means definitive and probably won't even be correct in some ways, but it's a start. -
--These now need to be of type apr_status_t and return a value of that type. Normally the return value will be APR_SUCCESS unless there is some need to signal an error in the cleanup. Be aware that even though you signal an error not all code yet checks and acts upon the error. -
+This is a first attempt at writing the lessons I learned + when trying to convert the mod_mmap_static module to Apache + 2.0. It's by no means definitive and probably won't even be + correct in some ways, but it's a start.
+-These should now be renamed to better signify where they sit in the overall process. So the name gets a small change from mmap_init to mmap_post_config. The arguments passed have undergone a radical change and now look like -
--A lot of the data types have been moved into the APR. This means that some have had a name change, such as the one shown above. The following is a brief list of some of the changes that you are likely to have to make. -
These now need to be of type apr_status_t and return a value + of that type. Normally the return value will be APR_SUCCESS + unless there is some need to signal an error in the cleanup. Be + aware that even though you signal an error not all code yet + checks and acts upon the error.
--The new architecture uses a series of hooks to provide for calling your functions. These you'll need to add to your module by way of a new function, static void register_hooks(void). The function is really reasonably straightforward once you understand what needs to be done. Each function that needs calling at some stage in the processing of a request needs to be registered, handlers do not. There are a number of phases where functions can be added, and for each you can specify with a high degree of control the relative order that the function will be called in. -
--This is the code that was added to mod_mmap_static: -
+These should now be renamed to better signify where they sit + in the overall process. So the name gets a small change from + mmap_init to mmap_post_config. The arguments passed have + undergone a radical change and now look like
+ +A lot of the data types have been moved into the APR. This + means that some have had a name change, such as the one shown + above. The following is a brief list of some of the changes + that you are likely to have to make.
+ +The new architecture uses a series of hooks to provide for + calling your functions. These you'll need to add to your module + by way of a new function, static void register_hooks(void). The + function is really reasonably straightforward once you + understand what needs to be done. Each function that needs + calling at some stage in the processing of a request needs to + be registered, handlers do not. There are a number of phases + where functions can be added, and for each you can specify with + a high degree of control the relative order that the function + will be called in.
+ +This is the code that was added to mod_mmap_static:
static void register_hooks(void)
{
@@ -68,31 +87,43 @@ static void register_hooks(void)
ap_hook_translate_name(mmap_static_xlat,aszPre,NULL,HOOK_LAST);
};
--This registers 2 functions that need to be called, one in the post_config stage (virtually every module will need this one) and one for the translate_name phase. note that while there are different function names the format of each is identical. So what is the format? -
--ap_hook_[phase_name](function_name, predecessors, successors, position); -
--There are 3 hook positions defined... -
--To define the position you use the position and then modify it with the predecessors and successors. each of the modifiers can be a list of functions that should be called, either before the function is run (predecessors) or after the function has run (successors). -
--In the mod_mmap_static case I didn't care about the post_config stage, but the mmap_static_xlat MUST be called after the core module had done it's name translation, hence the use of the aszPre to define a modifier to the position HOOK_LAST. -
--There are now a lot fewer stages to worry about when creating your module definition. The old defintion looked like -
+This registers 2 functions that need to be called, one in + the post_config stage (virtually every module will need this + one) and one for the translate_name phase. note that while + there are different function names the format of each is + identical. So what is the format?
+ +ap_hook_[phase_name](function_name, predecessors, + successors, position);
+ +There are 3 hook positions defined...
+ +To define the position you use the position and then modify + it with the predecessors and successors. each of the modifiers + can be a list of functions that should be called, either before + the function is run (predecessors) or after the function has + run (successors).
+ +In the mod_mmap_static case I didn't care about the + post_config stage, but the mmap_static_xlat MUST be called + after the core module had done it's name translation, hence the + use of the aszPre to define a modifier to the position + HOOK_LAST.
+ +There are now a lot fewer stages to worry about when + creating your module definition. The old defintion looked + like
module MODULE_VAR_EXPORT [module_name]_module =
{
@@ -117,9 +148,8 @@ module MODULE_VAR_EXPORT [module_name]_module =
/* post read-request */
};
--The new structure is a great deal simpler... -
+ +The new structure is a great deal simpler...
module MODULE_VAR_EXPORT [module_name]_module =
{
@@ -133,107 +163,96 @@ module MODULE_VAR_EXPORT [module_name]_module =
/* register hooks */
};
--Some of these read directly across, some don't. I'll try to summarise what should be done below. -
--The stages that read directly across : -
--The remainder of the old functions should be registered as hooks. There are the following hook stages defined so far... -
-Some of these read directly across, some don't. I'll try to + summarise what should be done below.
+The stages that read directly across :
- +The remainder of the old functions should be registered as + hooks. There are the following hook stages defined so + far...
+ +Warning - this is a first (fast) draft that needs further revision!
+Several changes in Apache 2.0 affect the internal request processing - mechanics. Module authors need to be aware of these changes so they - may take advantage of the optimizations and security enhancements.
+Warning - this is a first (fast) draft that needs further + revision!
-The first major change is to the subrequest and redirect mechanisms.
- There were a number of different code paths in Apache 1.3 to attempt
- to optimize subrequest or redirect behavior. As patches were introduced
- to 2.0, these optimizations (and the server behavior) were quickly broken
- due to this duplication of code. All duplicate code has been folded
- back into ap_process_internal_request() to prevent the
- code from falling out of sync again.
Several changes in Apache 2.0 affect the internal request + processing mechanics. Module authors need to be aware of these + changes so they may take advantage of the optimizations and + security enhancements.
-This means that much of the existing code was 'unoptimized'. It is - the Apache HTTP Project's first goal to create a robust and correct - implementation of the HTTP server RFC. Additional goals include - security, scalability and optimization. New methods were sought to - optimize the server (beyond the performance of Apache 1.3) without - introducing fragile or insecure code.
+The first major change is to the subrequest and redirect
+ mechanisms. There were a number of different code paths in
+ Apache 1.3 to attempt to optimize subrequest or redirect
+ behavior. As patches were introduced to 2.0, these
+ optimizations (and the server behavior) were quickly broken due
+ to this duplication of code. All duplicate code has been folded
+ back into ap_process_internal_request() to prevent
+ the code from falling out of sync again.
This means that much of the existing code was 'unoptimized'. + It is the Apache HTTP Project's first goal to create a robust + and correct implementation of the HTTP server RFC. Additional + goals include security, scalability and optimization. New + methods were sought to optimize the server (beyond the + performance of Apache 1.3) without introducing fragile or + insecure code.
-All requests pass through ap_process_request_internal()
- in request.c, including subrequests and redirects. If a module doesn't
- pass generated requests through this code, the author is cautioned that
- the module may be broken by future changes to request processing.
To streamline requests, the module author can take advantage of the - hooks offered to drop out of the request cycle early, or to bypass - core Apache hooks which are irrelevant (and costly in terms of CPU.)
+All requests pass through
+ ap_process_request_internal() in request.c,
+ including subrequests and redirects. If a module doesn't pass
+ generated requests through this code, the author is cautioned
+ that the module may be broken by future changes to request
+ processing.
To streamline requests, the module author can take advantage + of the hooks offered to drop out of the request cycle early, or + to bypass core Apache hooks which are irrelevant (and costly in + terms of CPU.)
-The request's parsed_uri path is unescaped, once and only once, at the - beginning of internal request processing.
+This step is bypassed if the proxyreq flag is set, or the parsed_uri.path - element is unset. The module has no further control of this one-time - unescape operation, either failing to unescape or multiply unescaping - the URL leads to security reprecussions.
+The request's parsed_uri path is unescaped, once and only + once, at the beginning of internal request processing.
-This step is bypassed if the proxyreq flag is set, or the + parsed_uri.path element is unset. The module has no further + control of this one-time unescape operation, either failing to + unescape or multiply unescaping the URL leads to security + reprecussions.
-All /../ and /./ elements are removed by
- ap_getparents(). This helps to ensure the path is (nearly)
- absolute before the request processing continues.
This step cannot be bypassed.
+All /../ and /./ elements are
+ removed by ap_getparents(). This helps to ensure
+ the path is (nearly) absolute before the request processing
+ continues.
This step cannot be bypassed.
-Every request is subject to an ap_location_walk() call.
- This ensures that <Location > sections are consistently enforced for
- all requests. If the request is an internal redirect or a sub-request,
- it may borrow some or all of the processing from the previous or parent
- request's ap_location_walk, so this step is generally very efficient
- after processing the main request.
Every request is subject to an
+ ap_location_walk() call. This ensures that
+ <Location > sections are consistently enforced for all
+ requests. If the request is an internal redirect or a
+ sub-request, it may borrow some or all of the processing from
+ the previous or parent request's ap_location_walk, so this step
+ is generally very efficient after processing the main
+ request.
Modules can determine the file name, or alter the given URI in this step. - For example, mod_vhost_alias will translate the URI's path into the - configured virtual host, mod_alias will translate the path to an alias - path, and if the request falls back on the core, the DocumentRoot is - prepended to the request resource. +
If all modules DECLINE this phase, an error 500 is returned to the browser, - and a "couldn't translate name" error is logged automatically.
+Modules can determine the file name, or alter the given URI + in this step. For example, mod_vhost_alias will translate the + URI's path into the configured virtual host, mod_alias will + translate the path to an alias path, and if the request falls + back on the core, the DocumentRoot is prepended to the request + resource.
-If all modules DECLINE this phase, an error 500 is returned + to the browser, and a "couldn't translate name" error is logged + automatically.
-After the file or correct URI was determined, the appropriate per-dir - configurations are merged together. For example, mod_proxy compares - and merges the appropriate <Proxy > sections. If the URI is nothing - more than a local (non-proxy) TRACE request, the core handles the - request and returns DONE. If no module answers this hook with OK or - DONE, the core will run the request filename against the <Directory > - and <Files > sections. If the request 'filename' isn't an absolute, - legal filename, a note is set for later termination.
+After the file or correct URI was determined, the + appropriate per-dir configurations are merged together. For + example, mod_proxy compares and merges the appropriate + <Proxy > sections. If the URI is nothing more than a + local (non-proxy) TRACE request, the core handles the request + and returns DONE. If no module answers this hook with OK or + DONE, the core will run the request filename against the + <Directory > and <Files > sections. If the request + 'filename' isn't an absolute, legal filename, a note is set for + later termination.
-Every request is hardened by a second ap_location_walk()
- call. This reassures that a translated request is still subjected to
- the configured <Location > sections. The request again borrows
- some or all of the processing from it's previous location_walk above,
- so this step is almost always very efficient unless the translated URI
- mapped to a substantially different path or Virtual Host.
Every request is hardened by a second
+ ap_location_walk() call. This reassures that a
+ translated request is still subjected to the configured
+ <Location > sections. The request again borrows some or
+ all of the processing from it's previous location_walk above,
+ so this step is almost always very efficient unless the
+ translated URI mapped to a substantially different path or
+ Virtual Host.
The main request then parses the client's headers. This prepares -the remaining request processing steps to better serve the client's -request.
+The main request then parses the client's headers. This + prepares the remaining request processing steps to better serve + the client's request.
-Needs Documentation. Code is;
+Needs Documentation. Code is;
switch (ap_satisfies(r)) {
case SATISFY_ALL:
@@ -128,13 +143,13 @@ request.
if (ap_some_auth_required(r)) {
if (((access_status = ap_run_check_user_id(r)) != 0) || !ap_auth_type(r)) {
return decl_die(access_status, ap_auth_type(r)
- ? "check user. No user file?"
- : "perform authentication. AuthType not set!", r);
+ ? "check user. No user file?"
+ : "perform authentication. AuthType not set!", r);
}
if (((access_status = ap_run_auth_checker(r)) != 0) || !ap_auth_type(r)) {
return decl_die(access_status, ap_auth_type(r)
- ? "check access. No groups file?"
- : "perform authentication. AuthType not set!", r);
+ ? "check access. No groups file?"
+ : "perform authentication. AuthType not set!", r);
}
}
break;
@@ -142,76 +157,75 @@ request.
if (((access_status = ap_run_access_checker(r)) != 0) || !ap_auth_type(r)) {
if (!ap_some_auth_required(r)) {
return decl_die(access_status, ap_auth_type(r)
- ? "check access"
- : "perform authentication. AuthType not set!", r);
+ ? "check access"
+ : "perform authentication. AuthType not set!", r);
}
if (((access_status = ap_run_check_user_id(r)) != 0) || !ap_auth_type(r)) {
return decl_die(access_status, ap_auth_type(r)
- ? "check user. No user file?"
- : "perform authentication. AuthType not set!", r);
+ ? "check user. No user file?"
+ : "perform authentication. AuthType not set!", r);
}
if (((access_status = ap_run_auth_checker(r)) != 0) || !ap_auth_type(r)) {
return decl_die(access_status, ap_auth_type(r)
- ? "check access. No groups file?"
- : "perform authentication. AuthType not set!", r);
+ ? "check access. No groups file?"
+ : "perform authentication. AuthType not set!", r);
+ }
}
- }
break;
}
-The modules have an opportunity to test the URI or filename against - the target resource, and set mime information for the request. Both - mod_mime and mod_mime_magic use this phase to compare the file name - or contents against the administrator's configuration and set the - content type, language, character set and request handler. Some - modules may set up their filters or other request handling parameters - at this time.
+If all modules DECLINE this phase, an error 500 is returned to the browser, - and a "couldn't find types" error is logged automatically.
+The modules have an opportunity to test the URI or filename + against the target resource, and set mime information for the + request. Both mod_mime and mod_mime_magic use this phase to + compare the file name or contents against the administrator's + configuration and set the content type, language, character set + and request handler. Some modules may set up their filters or + other request handling parameters at this time.
-If all modules DECLINE this phase, an error 500 is returned + to the browser, and a "couldn't find types" error is logged + automatically.
-Many modules are 'trounced' by some phase above. The fixups phase is - used by modules to 'reassert' their ownership or force the request's - fields to their appropriate values. It isn't always the cleanest - mechanism, but occasionally it's the only option.
+Many modules are 'trounced' by some phase above. The fixups + phase is used by modules to 'reassert' their ownership or force + the request's fields to their appropriate values. It isn't + always the cleanest mechanism, but occasionally it's the only + option.
-Modules that transform the content in some way can insert their values - and override existing filters, such that if the user configured a more - advanced filter out-of-order, then the module can move it's order as - need be. +
Modules that transform the content in some way can insert + their values and override existing filters, such that if the + user configured a more advanced filter out-of-order, then the + module can move it's order as need be.
-This phase is not part of the processing in
- ap_process_request_internal(). Many modules prepare one
- or more subrequests prior to creating any content at all. After the
- core, or a module calls ap_process_request_internal() it
- then calls ap_invoke_handler() to generate the request.
This phase is not part of the
+ processing in ap_process_request_internal(). Many
+ modules prepare one or more subrequests prior to creating any
+ content at all. After the core, or a module calls
+ ap_process_request_internal() it then calls
+ ap_invoke_handler() to generate the request.
The module finally has a chance to serve the request in it's handler - hook. Note that not every prepared request is sent to the handler - hook. Many modules, such as mod_autoindex, will create subrequests - for a given URI, and then never serve the subrequest, but simply - lists it for the user. Remember not to put required teardown from - the hooks above into this module, but register pool cleanups against - the request pool to free resources as required.
+The module finally has a chance to serve the request in its
+ handler hook. Note that not every prepared request is sent to
+ the handler hook. Many modules, such as mod_autoindex, will
+ create subrequests for a given URI, and then never serve the
+ subrequest, but simply list it for the user. Remember not to
+ put required teardown from the hooks above into this module,
+ but register pool cleanups against the request pool to free
+ resources as required.
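A hedged example of the pool-cleanup idiom recommended here,
using the APR cleanup registration call; the resource type and
the open/close helpers are invented for illustration:

typedef struct example_resource example_resource;        /* hypothetical */
extern example_resource *example_resource_open(request_rec *r);
extern void example_resource_close(example_resource *res);

static apr_status_t example_cleanup(void *data)
{
    example_resource_close((example_resource *) data);
    return APR_SUCCESS;
}

static int example_handler(request_rec *r)
{
    example_resource *res = example_resource_open(r);

    /* Tie the resource to the request pool: it is released
     * automatically when the request pool is destroyed, so the
     * hooks above need no explicit teardown.
     */
    apr_pool_cleanup_register(r->pool, res, example_cleanup,
                              apr_pool_cleanup_null);

    return OK;
}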
+ + - - - -