Warning: This document has not been fully updated to take into account changes made in the 2.0 version of the Apache HTTP Server. Some of the information may still be relevant, but please use it with care.
Author: Dean Gaudet
Related Modules: mod_dir, Multi-Processing Modules, mod_status
Related Directives: AllowOverride, DirectoryIndex, HostnameLookups, EnableMMAP, KeepAliveTimeout, MaxSpareServers, MinSpareServers, Options (FollowSymLinks and SymLinksIfOwnerMatch), StartServers
Apache 2.0 is a general-purpose webserver, designed to provide a balance of flexibility, portability, and performance. Although it has not been designed specifically to set benchmark records, Apache 2.0 is capable of high performance in many real-world situations.
Compared to Apache 1.3, release 2.0 contains many additional optimizations to increase throughput and scalability. Most of these improvements are enabled by default. However, there are compile-time and run-time configuration choices that can significantly affect performance. This document describes the options that a server administrator can configure to tune the performance of an Apache 2.0 installation. Some of these configuration options enable the httpd to better take advantage of the capabilities of the hardware and OS, while others allow the administrator to trade functionality for speed.
The single biggest hardware issue affecting webserver
    performance is RAM. A webserver should never ever have to swap,
    because swapping increases the latency of each request beyond the point
    that users consider "fast enough". That causes users to hit
    stop and reload, further increasing the load. You can, and
    should, control the MaxClients setting so that
    your server does not spawn so many children that it starts
    swapping.
Beyond that the rest is mundane: get a fast enough CPU, a fast enough network card, and fast enough disks, where "fast enough" is something that needs to be determined by experimentation.
Operating system choice is largely a matter of local concerns. But some guidelines that have proven generally useful are:
Prior to Apache 1.3, HostnameLookups defaulted
    to On. This adds latency to every request because it requires a
    DNS lookup to complete before the request is finished. In
    Apache 1.3 this setting defaults to Off. However (1.3 or
    later), if you use any Allow from domain or
    Deny from domain directives then you will pay for
    a double reverse DNS lookup (a reverse, followed by a forward
    to make sure that the reverse is not being spoofed). So for the
    highest performance avoid using these directives (it's fine to
    use IP addresses rather than domain names).
Note that it's possible to scope the directives, such as
    within a <Location /server-status> section.
    In this case the DNS lookups are only performed on requests
    matching the criteria. Here's an example which disables lookups
    except for .html and .cgi files:
HostnameLookups off
<Files ~ "\.(html|cgi)$">
    HostnameLookups on
</Files>
    
    Even so, if you just need DNS names in some CGIs, you
    could consider doing the gethostbyname call in the
    specific CGIs that need it.
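    For instance, here is an illustrative sketch (not part of the Apache
    distribution) of a small C CGI that resolves the client's name itself,
    using gethostbyaddr(3) on the REMOTE_ADDR environment variable, so that
    HostnameLookups can stay off for everything else:

    #include <arpa/inet.h>
    #include <netdb.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/socket.h>

    int main (void)
    {
        /* REMOTE_ADDR is set by the server for every CGI request */
        const char *ip = getenv ("REMOTE_ADDR");
        struct in_addr addr;
        struct hostent *he = NULL;

        if (ip != NULL && inet_aton (ip, &addr))
            he = gethostbyaddr ((const char *) &addr, sizeof (addr), AF_INET);

        printf ("Content-Type: text/plain\r\n\r\n");
        printf ("Client: %s\n", he ? he->h_name : (ip ? ip : "unknown"));
        return 0;
    }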
    Similarly, if you need to have hostname information in your server logs in order to generate reports of this information, you can postprocess your log file with logresolve, so that these lookups can be done without making the client wait. It is recommended that you do this postprocessing, and any other statistical analysis of the log file, somewhere other than your production web server machine, in order that this activity does not adversely affect server performance.
Wherever in your URL-space you do not have an Options
    FollowSymLinks, or you do have an Options
    SymLinksIfOwnerMatch, Apache will have to issue extra
    system calls to check up on symlinks: one extra call per
    filename component. For example, if you had:
DocumentRoot /www/htdocs
<Directory />
    Options SymLinksIfOwnerMatch
</Directory>
    
    and a request is made for the URI /index.html,
    then Apache will perform lstat(2) on
    /www, /www/htdocs, and
    /www/htdocs/index.html. The results of these
    lstat calls are never cached, so they will occur on
    every single request. If you really want the symlink
    security checking, you can do something like this:
    
DocumentRoot /www/htdocs
<Directory />
    Options FollowSymLinks
</Directory>
<Directory /www/htdocs>
    Options -FollowSymLinks +SymLinksIfOwnerMatch
</Directory>
    
    This at least avoids the extra checks for the
    DocumentRoot path. Note that you'll need to add
    similar sections if you have any Alias or
    RewriteRule paths outside of your document root.
    For highest performance, and no symlink protection, set
    FollowSymLinks everywhere, and never set
    SymLinksIfOwnerMatch. 
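    To make the cost concrete, here is an illustrative sketch (not Apache's
    actual code; the function name and buffer size are invented) of what a
    per-component symlink check has to do:

    #include <string.h>
    #include <sys/stat.h>

    /* Illustrative only: one lstat(2) per path component, as Apache must
     * do when FollowSymLinks is off or SymLinksIfOwnerMatch is on. */
    static int check_path_symlinks (const char *path)
    {
        char partial[1024];
        struct stat st;
        const char *slash = path;

        while ((slash = strchr (slash + 1, '/')) != NULL) {
            size_t len = (size_t) (slash - path);
            if (len >= sizeof (partial))
                return -1;
            memcpy (partial, path, len);
            partial[len] = '\0';
            if (lstat (partial, &st) != 0)     /* extra system call */
                return -1;
            /* with SymLinksIfOwnerMatch, a symlink found here would also
             * require a stat(2) of the target to compare owners */
        }
        return lstat (path, &st);              /* and one for the file itself */
    }

    For /www/htdocs/index.html this walks exactly the three
    lstat calls described above.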
    Wherever in your URL-space you allow overrides (typically
    .htaccess files), Apache will attempt to open
    .htaccess for each filename component. For
    example, if you had:
DocumentRoot /www/htdocs
<Directory />
    AllowOverride all
</Directory>
    
    and a request is made for the URI /index.html,
    then Apache will attempt to open /.htaccess,
    /www/.htaccess, and
    /www/htdocs/.htaccess. The solutions are similar
    to the previous case of Options FollowSymLinks.
    For highest performance use AllowOverride None
    everywhere in your filesystem. 
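    For example, the analogous configuration (directory paths are illustrative) is:

DocumentRoot /www/htdocs
<Directory />
    AllowOverride None
</Directory>
<Directory /www/htdocs/users>
    AllowOverride AuthConfig
</Directory>

    Here overrides are disabled everywhere except the one subtree that
    genuinely needs per-directory configuration.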
If at all possible, avoid content negotiation if you're really interested in every last ounce of performance. In practice the benefits of negotiation outweigh the performance penalties. There's one case where you can speed up the server. Instead of using a wildcard such as:

DirectoryIndex index

use a complete list of options:

DirectoryIndex index.cgi index.pl index.shtml index.html

where you list the most common choice first.
Also note that explicitly creating a type-map
    file provides better performance than using
    MultiViews, as the necessary information can be
    determined by reading this single file, rather than having to
    scan the directory for files.
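    For reference, a type-map file is a plain-text file of RFC 822-style
    records, one per variant; a minimal sketch (file names and languages
    are illustrative) looks like this:

URI: document.html.en
Content-type: text/html
Content-language: en

URI: document.html.fr
Content-type: text/html
Content-language: fr

    Such a file is typically given a .var extension and served
    through the type-map handler (for example,
    AddHandler type-map var).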
In situations where Apache 2.0 needs to look at the contents of a file being delivered--for example, when doing server-side-include processing--it normally memory-maps the file if the OS supports some form of mmap(2).
On some platforms, this memory-mapping improves performance. However, there are cases where memory-mapping can hurt the performance or even the stability of the httpd:
For installations where either of these factors applies, you
    should use EnableMMAP off to disable the memory-mapping
    of delivered files.  (Note: This directive can be overridden on
    a per-directory basis.)
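    For example, memory-mapping can be disabled only for the affected tree
    while staying enabled everywhere else (the directory path is illustrative):

EnableMMAP on
<Directory "/www/nfs-docs">
    EnableMMAP off
</Directory>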
Prior to Apache 1.3 the MinSpareServers,
    MaxSpareServers, and StartServers
    settings all had drastic effects on benchmark results. In
    particular, Apache required a "ramp-up" period in order to
    reach a number of children sufficient to serve the load being
    applied. After the initial spawning of
    StartServers children, only one child per second
    would be created to satisfy the MinSpareServers
    setting. So a server being accessed by 100 simultaneous
    clients, using the default StartServers of 5 would
    take on the order of 95 seconds to spawn enough children to handle
    the load. This works fine in practice on real-life servers,
    because they aren't restarted frequently. But it does really
    poorly on benchmarks which might only run for ten minutes.
The one-per-second rule was implemented in an effort to
    avoid swamping the machine with the startup of new children. If
    the machine is busy spawning children it can't service
    requests. But it has such a drastic effect on the perceived
    performance of Apache that it had to be replaced. As of Apache
    1.3, the code will relax the one-per-second rule. It will spawn
    one, wait a second, then spawn two, wait a second, then spawn
    four, and it will continue exponentially until it is spawning
    32 children per second. It will stop whenever it satisfies the
    MinSpareServers setting.
This appears to be responsive enough that it's almost
    unnecessary to twiddle the MinSpareServers,
    MaxSpareServers and StartServers
    knobs. When more than 4 children are spawned per second, a
    message will be emitted to the ErrorLog. If you
    see a lot of these errors then consider tuning these settings.
    Use the mod_status output as a guide.
Related to process creation is process death induced by the
    MaxRequestsPerChild setting. By default this is 0,
    which means that there is no limit to the number of requests
    handled per child. If your configuration currently has this set
    to some very low number, such as 30, you may want to bump this
    up significantly. If you are running SunOS or an old version of
    Solaris, limit this to 10000 or so because of memory leaks.
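    As a point of reference, a prefork-style configuration touching the
    directives discussed in this section might look like the following;
    the values are illustrative starting points (roughly the stock
    defaults), not recommendations, and should be tuned with the help of
    mod_status:

StartServers          5
MinSpareServers       5
MaxSpareServers      10
MaxClients          150
MaxRequestsPerChild   0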
When keep-alives are in use, children will be kept busy
    doing nothing waiting for more requests on the already open
    connection. The default KeepAliveTimeout of 15
    seconds attempts to minimize this effect. The tradeoff here is
    between network bandwidth and server resources. In no event
    should you raise this above about 60 seconds, as 
    most of the benefits are lost.
If you include mod_status and you also set
    ExtendedStatus On when building and running
    Apache, then on every request Apache will perform two calls to
    gettimeofday(2) (or times(2)
    depending on your operating system), and (pre-1.3) several
    extra calls to time(2). This is all done so that
    the status report contains timing indications. For highest
    performance, set ExtendedStatus off (which is the
    default).
This discusses a shortcoming in the Unix socket API. Suppose
    your web server uses multiple Listen statements to
    listen on either multiple ports or multiple addresses. In order
    to test each socket to see if a connection is ready Apache uses
    select(2). select(2) indicates that a
    socket has zero or at least one connection
    waiting on it. Apache's model includes multiple children, and
    all the idle ones test for new connections at the same time. A
    naive implementation looks something like this (these examples
    do not match the code, they're contrived for pedagogical
    purposes):
    for (;;) {
        for (;;) {
            fd_set accept_fds;

            FD_ZERO (&accept_fds);
            for (i = first_socket; i <= last_socket; ++i) {
                FD_SET (i, &accept_fds);
            }
            rc = select (last_socket+1, &accept_fds, NULL, NULL, NULL);
            if (rc < 1) continue;
            new_connection = -1;
            for (i = first_socket; i <= last_socket; ++i) {
                if (FD_ISSET (i, &accept_fds)) {
                    new_connection = accept (i, NULL, NULL);
                    if (new_connection != -1) break;
                }
            }
            if (new_connection != -1) break;
        }
        process the new_connection;
    }
    
    But this naive implementation has a serious starvation problem.
    Recall that multiple children execute this loop at the same
    time, and so multiple children will block at
    select when they are in between requests. All
    those blocked children will awaken and return from
    select when a single request appears on any socket
    (the number of children which awaken varies depending on the
    operating system and timing issues). They will all then fall
    down into the loop and try to accept the
    connection. But only one will succeed (assuming there's still
    only one connection ready), the rest will be blocked
    in accept. This effectively locks those children
    into serving requests from that one socket and no other
    sockets, and they'll be stuck there until enough new requests
    appear on that socket to wake them all up. This starvation
    problem was first documented in PR#467. There
    are at least two solutions. 
    One solution is to make the sockets non-blocking. In this
    case the accept won't block the children, and they
    will be allowed to continue immediately. But this wastes CPU
    time. Suppose you have ten idle children in
    select, and one connection arrives. Then nine of
    those children will wake up, try to accept the
    connection, fail, and loop back into select,
    accomplishing nothing. Meanwhile none of those children are
    servicing requests that occurred on other sockets until they
    get back up to the select again. Overall this
    solution does not seem very fruitful unless you have as many
    idle CPUs (in a multiprocessor box) as you have idle children,
    not a very likely situation.
Another solution, the one used by Apache, is to serialize entry into the inner loop. The loop looks like this (the differences from the previous version are the accept_mutex_on and accept_mutex_off calls):
    for (;;) {
        accept_mutex_on ();
        for (;;) {
            fd_set accept_fds;

            FD_ZERO (&accept_fds);
            for (i = first_socket; i <= last_socket; ++i) {
                FD_SET (i, &accept_fds);
            }
            rc = select (last_socket+1, &accept_fds, NULL, NULL, NULL);
            if (rc < 1) continue;
            new_connection = -1;
            for (i = first_socket; i <= last_socket; ++i) {
                if (FD_ISSET (i, &accept_fds)) {
                    new_connection = accept (i, NULL, NULL);
                    if (new_connection != -1) break;
                }
            }
            if (new_connection != -1) break;
        }
        accept_mutex_off ();
        process the new_connection;
    }
    
    The functions
    accept_mutex_on and accept_mutex_off
    implement a mutual exclusion semaphore. Only one child can have
    the mutex at any time. There are several choices for
    implementing these mutexes. The choice is defined in
    src/conf.h (pre-1.3) or
    src/include/ap_config.h (1.3 or later). Some
    architectures do not have any locking choice made; on these
    architectures it is unsafe to use multiple Listen
    directives.
    USE_FLOCK_SERIALIZED_ACCEPT: uses the flock(2)
      system call to lock a lock file (located by the
      LockFile directive).

    USE_FCNTL_SERIALIZED_ACCEPT: uses the fcntl(2)
      system call to lock a lock file (located by the
      LockFile directive).

    USE_SYSVSEM_SERIALIZED_ACCEPT: uses SysV-style semaphores to
      implement the mutex. Unfortunately SysV-style semaphores have
      some bad side-effects. One is that it's possible Apache will
      die without cleaning up the semaphore (see the
      ipcs(8) man page). The other is that the
      semaphore API allows for a denial of service attack by any
      CGIs running under the same uid as the webserver
      (i.e., all CGIs, unless you use something like
      suexec or cgiwrapper). For these reasons this method is not
      used on any architecture except IRIX (where the previous two
      are prohibitively expensive on most IRIX boxes).

    USE_USLOCK_SERIALIZED_ACCEPT: available only on IRIX; uses
      usconfig(2) to create a mutex. While this
      method avoids the hassles of SysV-style semaphores, it is not
      the default for IRIX. This is because on single-processor
      IRIX boxes (5.3 or 6.2) the uslock code is two orders of
      magnitude slower than the SysV-semaphore code. On
      multi-processor IRIX boxes the uslock code is an order of
      magnitude faster than the SysV-semaphore code. Kind of a
      messed up situation. So if you're using a multiprocessor IRIX
      box then you should rebuild your webserver with
      -DUSE_USLOCK_SERIALIZED_ACCEPT on the
      EXTRA_CFLAGS.

    USE_PTHREAD_SERIALIZED_ACCEPT: uses POSIX mutexes and should
      work on any architecture implementing the full POSIX threads
      specification.

    If your system has another method of serialization which isn't in the above list then it may be worthwhile adding code for it (and submitting a patch back to Apache).
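    To make the general technique concrete, here is a rough sketch (not
    Apache's actual implementation) of fcntl(2)-based
    serialization, assuming lock_fd is an already-open
    descriptor for the file named by the LockFile directive:

    #include <errno.h>
    #include <fcntl.h>
    #include <unistd.h>

    static int lock_fd;              /* opened once per child at startup */

    static void accept_mutex_on (void)
    {
        struct flock lock;
        lock.l_type   = F_WRLCK;
        lock.l_whence = SEEK_SET;
        lock.l_start  = 0;
        lock.l_len    = 0;           /* lock the whole file */
        while (fcntl (lock_fd, F_SETLKW, &lock) < 0 && errno == EINTR)
            continue;                /* retry if interrupted by a signal */
    }

    static void accept_mutex_off (void)
    {
        struct flock lock;
        lock.l_type   = F_UNLCK;
        lock.l_whence = SEEK_SET;
        lock.l_start  = 0;
        lock.l_len    = 0;
        fcntl (lock_fd, F_SETLK, &lock);
    }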
Another solution that has been considered but never implemented is to partially serialize the loop -- that is, let in a certain number of processes. This would only be of interest on multiprocessor boxes where it's possible multiple children could run simultaneously, and the serialization actually doesn't take advantage of the full bandwidth. This is a possible area of future investigation, but priority remains low because highly parallel web servers are not the norm.
Ideally you should run servers without multiple
    Listen statements if you want the highest
    performance. But read on.
The above is fine and dandy for multiple socket servers, but
    what about single socket servers? In theory they shouldn't
    experience any of these same problems because all children can
    just block in accept(2) until a connection
    arrives, and no starvation results. In practice this hides
    almost the same "spinning" behaviour discussed above in the
    non-blocking solution. The way that most TCP stacks are
    implemented, the kernel actually wakes up all processes blocked
    in accept when a single connection arrives. One of
    those processes gets the connection and returns to user-space,
    the rest spin in the kernel and go back to sleep when they
    discover there's no connection for them. This spinning is
    hidden from the user-land code, but it's there nonetheless.
    This can result in the same load-spiking wasteful behaviour
    that a non-blocking solution to the multiple sockets case
    can.
For this reason we have found that many architectures behave
    more "nicely" if we serialize even the single socket case. So
    this is actually the default in almost all cases. Crude
    experiments under Linux (2.0.30 on a dual Pentium pro 166
    w/128Mb RAM) have shown that the serialization of the single
    socket case causes less than a 3% decrease in requests per
    second over unserialized single-socket. But unserialized
    single-socket showed an extra 100ms latency on each request.
    This latency is probably a wash on long haul lines, and only an
    issue on LANs. If you want to override the single socket
    serialization you can define
    SINGLE_LISTEN_UNSERIALIZED_ACCEPT and then
    single-socket servers will not serialize at all.
As discussed in draft-ietf-http-connection-00.txt section 8, in order for an HTTP server to reliably implement the protocol it needs to shutdown each direction of the communication independently (recall that a TCP connection is bi-directional, each half is independent of the other). This fact is often overlooked by other servers, but is correctly implemented in Apache as of 1.2.
When this feature was added to Apache it caused a flurry of problems on various versions of Unix because of a shortsightedness. The TCP specification does not state that the FIN_WAIT_2 state has a timeout, but it doesn't prohibit it. On systems without the timeout, Apache 1.2 induces many sockets stuck forever in the FIN_WAIT_2 state. In many cases this can be avoided by simply upgrading to the latest TCP/IP patches supplied by the vendor. In cases where the vendor has never released patches (i.e., SunOS4 -- although folks with a source license can patch it themselves) we have decided to disable this feature.
There are two ways of accomplishing this. One is the socket
    option SO_LINGER. But as fate would have it, this
    has never been implemented properly in most TCP/IP stacks. Even
    on those stacks with a proper implementation (i.e.,
    Linux 2.0.31) this method proves to be more expensive (cputime)
    than the next solution.
For the most part, Apache implements this in a function
    called lingering_close (in
    http_main.c). The function looks roughly like
    this:
    void lingering_close (int s)
    {
        char junk_buffer[2048];
        fd_set read_fds;
        struct timeval tv;
        int rc;

        /* shutdown the sending side */
        shutdown (s, 1);

        signal (SIGALRM, lingering_death);
        alarm (30);

        for (;;) {
            /* wait up to 2 seconds for more data from the client */
            FD_ZERO (&read_fds);
            FD_SET (s, &read_fds);
            tv.tv_sec = 2;
            tv.tv_usec = 0;
            rc = select (s + 1, &read_fds, NULL, NULL, &tv);
            if (rc < 0) break;                          /* error */
            if (rc > 0 && FD_ISSET (s, &read_fds)) {
                if (read (s, junk_buffer, sizeof (junk_buffer)) <= 0) {
                    break;
                }
                /* just toss away whatever is here */
            }
        }
        close (s);
    }
    
    This naturally adds some expense at the end of a connection,
    but it is required for a reliable implementation. As HTTP/1.1
    becomes more prevalent, and all connections are persistent,
    this expense will be amortized over more requests. If you want
    to play with fire and disable this feature you can define
    NO_LINGCLOSE, but this is not recommended at all.
    In particular, as HTTP/1.1 pipelined persistent connections
    come into use lingering_close is an absolute
    necessity (and 
    pipelined connections are faster, so you want to support
    them). 
    Apache's parent and children communicate with each other
    through something called the scoreboard. Ideally this should be
    implemented in shared memory. For those operating systems that
    we either have access to, or have been given detailed ports
    for, it typically is implemented using shared memory. The rest
    default to using an on-disk file. The on-disk file is not only
    slow, but it is unreliable (and less featured). Peruse the
    src/main/conf.h file for your architecture and
    look for either USE_MMAP_SCOREBOARD or
    USE_SHMGET_SCOREBOARD. Defining one of those two
    (as well as their companions HAVE_MMAP and
    HAVE_SHMGET respectively) enables the supplied
    shared memory code. If your system has another type of shared
    memory, edit the file src/main/http_main.c and add
    the hooks necessary to use it in Apache. (Send us back a patch
    too please.)
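    For illustration, the SysV flavour selected by
    USE_SHMGET_SCOREBOARD boils down to something like the
    following sketch (simplified; the function name is invented and error
    handling is abbreviated):

    #include <stddef.h>
    #include <sys/ipc.h>
    #include <sys/shm.h>

    /* Create an anonymous shared memory segment before forking children,
     * so the parent and every child see the same scoreboard. */
    static void *create_shared_scoreboard (size_t size)
    {
        void *mem;
        int shmid = shmget (IPC_PRIVATE, size, IPC_CREAT | 0600);

        if (shmid == -1)
            return NULL;
        mem = shmat (shmid, NULL, 0);
        /* mark for removal; the segment persists until the last detach */
        shmctl (shmid, IPC_RMID, NULL);
        return (mem == (void *) -1) ? NULL : mem;
    }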
Historical note: The Linux port of Apache didn't start to use shared memory until version 1.2 of Apache. This oversight resulted in really poor and unreliable behaviour of earlier versions of Apache on Linux.
DYNAMIC_MODULE_LIMIT

If you have no intention of using dynamically loaded modules
    (you probably don't if you're reading this and tuning your
    server for every last ounce of performance) then you should add
    -DDYNAMIC_MODULE_LIMIT=0 when building your
    server. This will save RAM that's allocated only for supporting
    dynamically loaded modules.
Here is a system call trace of Apache 2.0.38 with the worker MPM on Solaris 8. This trace was collected using:
truss -l -p httpd_child_pid.
    The -l option tells truss to log the ID of the
    LWP (lightweight process--Solaris's form of kernel-level thread)
    that invokes each system call.
Other systems may have different system call tracing utilities
    such as strace, ktrace, or par.
    They all produce similar output.
In this trace, a client has requested a 10KB static file from the httpd. Traces of non-static requests or requests with content negotiation look wildly different (and quite ugly in some cases).
/67: accept(3, 0x00200BEC, 0x00200C0C, 1) (sleeping...)
/67: accept(3, 0x00200BEC, 0x00200C0C, 1) = 9
In this trace, the listener thread is running within LWP #67.
Note the lack of accept(2) serialization. On this particular platform, the worker MPM uses an unserialized accept by default unless it is listening on multiple ports.
/65: lwp_park(0x00000000, 0) = 0
/67: lwp_unpark(65, 1) = 0
Upon accepting the connection, the listener thread wakes up a worker thread to do the request processing. In this trace, the worker thread that handles the request is mapped to LWP #65.
/65: getsockname(9, 0x00200BA4, 0x00200BC4, 1) = 0
In order to implement virtual hosts, Apache needs to know the local socket address used to accept the connection. It is possible to eliminate this call in many situations (such as when there are no virtual hosts, or when Listen directives are used which do not have wildcard addresses). But no effort has yet been made to do these optimizations.
/65: brk(0x002170E8) = 0
/65: brk(0x002190E8) = 0
The brk(2) calls allocate memory from the heap. It is rare to see these in a system call trace, because the httpd uses custom memory allocators (apr_pool and apr_bucket_alloc) for most request processing. In this trace, the httpd has just been started, so it must call malloc(3) to get the blocks of raw memory with which to create the custom memory allocators.
/65: fcntl(9, F_GETFL, 0x00000000) = 2
/65: fstat64(9, 0xFAF7B818) = 0
/65: getsockopt(9, 65535, 8192, 0xFAF7B918, 0xFAF7B910, 2190656) = 0
/65: fstat64(9, 0xFAF7B818) = 0
/65: getsockopt(9, 65535, 8192, 0xFAF7B918, 0xFAF7B914, 2190656) = 0
/65: setsockopt(9, 65535, 8192, 0xFAF7B918, 4, 2190656) = 0
/65: fcntl(9, F_SETFL, 0x00000082) = 0
Next, the worker thread puts the connection to the client (file descriptor 9) in non-blocking mode. The setsockopt(2) and getsockopt(2) calls are a side-effect of how Solaris's libc handles fcntl(2) on sockets.
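In portable terms, switching a descriptor to non-blocking mode is just the usual pair of fcntl(2) calls; a minimal sketch (the helper name is invented):

    #include <fcntl.h>

    /* Sketch: switch a socket descriptor to non-blocking mode. */
    static int set_nonblocking (int fd)
    {
        int flags = fcntl (fd, F_GETFL, 0);
        if (flags == -1)
            return -1;
        return fcntl (fd, F_SETFL, flags | O_NONBLOCK);
    }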
/65: read(9, " G E T / 1 0 k . h t m".., 8000) = 97
The worker thread reads the request from the client.
/65:    stat("/var/httpd/apache/httpd-8999/htdocs/10k.html", 0xFAF7B978) = 0
/65:    open("/var/httpd/apache/httpd-8999/htdocs/10k.html", O_RDONLY) = 10
This httpd has been configured with Options FollowSymLinks and AllowOverride None. Thus it doesn't need to lstat(2) each directory in the path leading up to the requested file, nor check for .htaccess files. It simply calls stat(2) to verify that the file: 1) exists, and 2) is a regular file, not a directory.
/65: sendfilev(0, 9, 0x00200F90, 2, 0xFAF7B53C) = 10269
In this example, the httpd is able to send the HTTP response header and the requested file with a single sendfilev(2) system call. Sendfile semantics vary among operating systems. On some other systems, it is necessary to do a write(2) or writev(2) call to send the headers before calling sendfile(2).
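For comparison, on a system whose sendfile cannot carry leading headers, the pattern looks roughly like this sketch (Linux-style sendfile(2); the descriptor names, header string, and function name are placeholders):

    #include <string.h>
    #include <sys/sendfile.h>
    #include <sys/types.h>
    #include <sys/uio.h>

    /* Send the HTTP headers with writev(2), then the file body with
     * sendfile(2). */
    static int send_response (int client_fd, int file_fd, off_t file_len)
    {
        const char *hdr = "HTTP/1.1 200 OK\r\nContent-Type: text/html\r\n\r\n";
        struct iovec iov;
        off_t offset = 0;

        iov.iov_base = (void *) hdr;
        iov.iov_len  = strlen (hdr);
        if (writev (client_fd, &iov, 1) < 0)        /* headers first */
            return -1;

        while (offset < file_len) {                 /* then the body */
            ssize_t n = sendfile (client_fd, file_fd, &offset,
                                  (size_t) (file_len - offset));
            if (n <= 0)
                return -1;
        }
        return 0;
    }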
/65: write(4, " 1 2 7 . 0 . 0 . 1 - ".., 78) = 78
This write(2) call records the request in the access log. Note that one thing missing from this trace is a time(2) call. Unlike Apache 1.3, Apache 2.0 uses gettimeofday(3) to look up the time. On some operating systems, like Linux or Solaris, gettimeofday has an optimized implementation that doesn't require as much overhead as a typical system call.
/65: shutdown(9, 1, 1) = 0
/65: poll(0xFAF7B980, 1, 2000) = 1
/65: read(9, 0xFAF7BC20, 512) = 0
/65: close(9) = 0
The worker thread does a lingering close of the connection.
/65: close(10) = 0
/65: lwp_park(0x00000000, 0) (sleeping...)
Finally the worker thread closes the file that it has just delivered and blocks until the listener assigns it another connection.
/67: accept(3, 0x001FEB74, 0x001FEB94, 1) (sleeping...)
Meanwhile, the listener thread is able to accept another connection as soon as it has dispatched this connection to a worker thread (subject to some flow-control logic in the worker MPM that throttles the listener if all the available workers are busy). Though it isn't apparent from this trace, the next accept(2) can (and usually does, under high load conditions) occur in parallel with the worker thread's handling of the just-accepted connection.