Reverse proxy resource usage – httpd 2.2.15

Recently I had reason to find out what happens when your httpd config includes a LOT of ProxyPass statements. The particular use case involved an application farm with one VirtualHost per domain for hundreds of domains, each very similar (though not identical), with dozens of ProxyPass statements to shared backend apps. Even with just a few dozen domains configured, memory and CPU use were very high.

I set up a simple test rig (CentOS 5 VM, 1GB RAM, single x86 CPU, ERS 4.0.2 httpd 2.2.15) and ran some unscientific tests where the config included 20,000 ProxyPass statements with these variables:

  1. Unique vs. repeated – unique statements each proxied to a unique destination, while repeated ones proxied different locations to the same destination.
  2. Balancer – the statements either proxy directly to the URL, or use a pre-defined balancer:// address.
  3. Vhost – the ProxyPass statements were either all in the main_server definition or each placed inside its own vhost.
  4. MPM – either prefork or worker MPM is used.

No actual load was applied to the server – I just wanted to see what it took to read the config and start up. Settings were the defaults for each MPM (5 children for prefork, 3 for worker) – obviously you’d tune these depending on usage. I tried to wait for things to settle down a bit after startup before reading “top” sorted by memory usage.
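For reference, a minimal sketch of what those defaults amount to if written out explicitly (illustrative only – I left the stock settings alone rather than declaring them; ThreadsPerChild defaults to 25 for the worker MPM):

<IfModule prefork.c>
 # stock prefork: 5 children at startup, each handling one request at a time
 StartServers    5
</IfModule>
<IfModule worker.c>
 # stock worker: 3 children at startup, 25 threads (concurrent requests) per child
 StartServers    3
 ThreadsPerChild 25
</IfModule>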

I also tried some other methods for accomplishing this task to see what the memory footprint would be.

Conclusions

The test descriptions and raw data are below. But first, I’ll summarize what I noticed.

  • Individual ProxyPass directives each create a proxy worker. 20K proxy workers take a lot of RAM – way more than 20K vhosts.
  • Duplicate proxy workers (same destination) are appropriately thrown away (after warning) and don’t require more RAM than doing the same thing against a shared balancer:// worker (so implicitly, that’s what’s happening – they share a worker).
  • When ProxyPass is used inside many vhosts, the benefit of shared workers is not as significant. Using a balancer:// still saves perhaps half of the RAM used by plain ProxyPass directives, but per-process RAM usage is higher than I would expect from simply adding the RAM a pile of ProxyPass directives uses to the RAM needed to define empty vhosts.
  • If a ProxyPass statement is relevant for all vhosts (or almost all), you can put it in the main_server config and it will be inherited by all vhosts at no extra cost. Caveats: a main_server ProxyPass can’t be overridden by a ProxyPass in the vhost (the vhost’s ProxyPass directives are merged in after the main_server’s, so e.g. a ProxyPass ! in the vhost won’t stop a ProxyPass in the main_server). However, a ProxyPass can be pre-empted by a RewriteRule (mod_rewrite’s hooks run before mod_proxy’s).
  • Using a RewriteRule [P] directive to proxy instead of a ProxyPass requires virtually no memory. I suspect there may be a performance penalty, though, if it isn’t pointed at a predefined balancer:// worker.
  • Worker processes are only a little larger than prefork processes. Of course, each one handles 25x as many requests at a time (or as many as you configure with ThreadsPerChild), so that’s pretty significant when your processes are taking up this much RAM.

Test data

Platform:

  • CentOS 5 VM
  • 1GB RAM
  • single 32-bit x86 CPU
  • ERS 4.0.2 httpd 2.2.15

Simple Unique ProxyPasses

I simply created 20K statements like the following:

ProxyPass /000001 http://www.google.com/000001
ProxyPass /000002 http://www.google.com/000002
ProxyPass /000003 http://www.google.com/000003

Prefork startup time: 55 secs

Prefork “top” usage:

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
28995 luke      18   0  515m 259m  14m S  0.0 25.7   0:01.41 httpsd.prefork
28993 luke      18   0  515m 259m  14m S  0.0 25.6   0:01.10 httpsd.prefork
28994 luke      18   0  515m 218m  14m S  0.0 21.6   0:01.10 httpsd.prefork
28992 luke      18   0  515m 155m  14m S  0.0 15.4   0:01.00 httpsd.prefork
28991 luke      18   0  515m 100m  14m S  0.0 10.0   0:00.95 httpsd.prefork
28936 luke      18   0  206m  36m  14m R  0.0  3.6   0:31.26 httpsd.prefork

Worker startup time: 54 secs

Worker “top” usage:

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
30835 luke      25   0  476m 134m  20m S  0.0 13.3   0:00.44 httpsd.worker
30834 luke      25   0  476m 134m  20m S  0.0 13.3   0:00.45 httpsd.worker
30778 luke      18   0  206m 130m  20m S  0.0 12.9   0:31.94 httpsd.worker
30833 luke      25   0  186m 110m  476 S  0.0 10.9   0:00.00 httpsd.worker

Unique ProxyPass to balancer

This is similar to the previous test, but I used a balancer instead of specifying the destination URL directly:

<Proxy balancer://goo>
 BalancerMember http://www.google.com
</Proxy>
ProxyPass /000001 balancer://goo/000001
ProxyPass /000002 balancer://goo/000002
ProxyPass /000003 balancer://goo/000003

Prefork startup time: 2 secs

Prefork “top” usage:

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
32018 luke      18   0 16932  12m 1300 S  0.0  1.3   0:00.08 httpsd.prefork
32020 luke      22   0 16932  12m  500 S  0.0  1.2   0:00.00 httpsd.prefork
32021 luke      22   0 16932  12m  500 S  0.0  1.2   0:00.00 httpsd.prefork
32022 luke      25   0 16932  12m  500 S  0.0  1.2   0:00.00 httpsd.prefork
32023 luke      25   0 16932  12m  500 S  0.0  1.2   0:00.00 httpsd.prefork
32024 luke      25   0 16932  12m  500 S  0.0  1.2   0:00.00 httpsd.prefork

Worker startup time: 3 secs

Worker “top” usage:

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
32069 luke      18   0 17100  13m 1472 S  0.0  1.3   0:00.10 httpsd.worker
32074 luke      25   0  286m  12m  924 S  0.0  1.3   0:00.00 httpsd.worker
32071 luke      25   0  286m  12m  920 S  0.0  1.3   0:00.00 httpsd.worker
32070 luke      23   0 16992  12m  492 S  0.0  1.2   0:00.00 httpsd.worker

Comment: Sharing a balancer saved a lot of RAM here.

Simple ProxyPass to repeated destination

This time the proxy destination is the same for every statement.

ProxyPass /000001/ http://www.google.com/
ProxyPass /000002/ http://www.google.com/
ProxyPass /000003/ http://www.google.com/

Lots of warnings are issued to this effect:

[warn] worker http://www.google.com/ already used by another worker

Prefork startup time: 3 secs

Prefork “top” usage:

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
29780 luke      18   0 17372  13m 1264 S  0.0  1.3   0:00.16 httpsd.prefork
29782 luke      23   0 17372  12m  500 S  0.0  1.3   0:00.00 httpsd.prefork
29783 luke      23   0 17372  12m  500 S  0.0  1.3   0:00.00 httpsd.prefork
29784 luke      23   0 17372  12m  500 S  0.0  1.3   0:00.00 httpsd.prefork
29785 luke      25   0 17372  12m  500 S  0.0  1.3   0:00.00 httpsd.prefork
29786 luke      25   0 17372  12m  500 S  0.0  1.3   0:00.00 httpsd.prefork

Worker startup time: 3 secs

Worker “top” usage:

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
31112 luke      18   0 17556  13m 1436 S  0.0  1.4   0:00.15 httpsd.worker
31117 luke      25   0  287m  13m  924 S  0.0  1.3   0:00.00 httpsd.worker
31115 luke      22   0  287m  13m  920 S  0.0  1.3   0:00.00 httpsd.worker
31114 luke      22   0 17448  12m  492 S  0.0  1.3   0:00.00 httpsd.worker

Comment: Memory numbers look the same as with sharing a balancer – the conclusion is that the duplicates are sharing a worker.

ProxyPass to same destination, with balancer

Same as the above, except that I used the existing balancer. No real difference.

ProxyPass /000001/ balancer://goo/
ProxyPass /000002/ balancer://goo/
ProxyPass /000003/ balancer://goo/

Prefork startup time: 2 secs

Prefork “top” usage:

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
29858 luke      18   0 16932  12m 1300 S  0.0  1.3   0:00.07 httpsd.prefork
29860 luke      23   0 16932  12m  500 S  0.0  1.2   0:00.00 httpsd.prefork
29861 luke      23   0 16932  12m  500 S  0.0  1.2   0:00.00 httpsd.prefork
29862 luke      25   0 16932  12m  500 S  0.0  1.2   0:00.00 httpsd.prefork
29863 luke      25   0 16932  12m  500 S  0.0  1.2   0:00.00 httpsd.prefork
29864 luke      25   0 16932  12m  500 S  0.0  1.2   0:00.00 httpsd.prefork

Worker startup time: 3 secs

Worker “top” usage:

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
31214 luke      18   0 17100  13m 1472 S  0.0  1.3   0:00.07 httpsd.worker
31219 luke      25   0  286m  12m  924 S  0.0  1.3   0:00.00 httpsd.worker
31217 luke      21   0  286m  12m  920 S  0.0  1.3   0:00.00 httpsd.worker
31216 luke      21   0 16992  12m  492 S  0.0  1.2   0:00.00 httpsd.worker

Unique ProxyPass inside vhost

For this (and the next three) I just did the same thing as before, but also created a vhost around each ProxyPass.

<VirtualHost *>
 ServerName 000001.google.com
 ProxyPass /000001/ http://www.google.com/000001/
</VirtualHost>
<VirtualHost *>
 ServerName 000002.google.com
 ProxyPass /000002/ http://www.google.com/000002/
</VirtualHost>

The memory numbers below reflect that the system was swapping while trying to find memory for these processes (note the RES numbers are all over the place) – putting any actual load on this system would be disastrous.

Prefork startup time: 7 secs

Prefork “top” usage:

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
29972 luke      18   0  659m 334m  14m R  1.5 33.1   0:02.21 httpsd.prefork
29979 luke      18   0  659m 216m  14m R  1.7 21.4   0:02.13 httpsd.prefork
29978 luke      18   0  659m 215m  14m D  1.3 21.3   0:02.07 httpsd.prefork
29971 luke      18   0  659m 149m  14m R  1.3 14.8   0:02.04 httpsd.prefork
29970 luke      18   0  659m 140m  14m D  1.3 13.9   0:02.17 httpsd.prefork
29968 luke      18   0  413m  68m  10m S  0.0  6.8   0:01.36 httpsd.prefork

Worker startup time: 4 secs

Worker “top” usage:

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
31321 luke      25   0  683m 381m  20m S  0.0 37.8   0:01.06 httpsd.worker
31320 luke      25   0  683m 381m  20m S  0.0 37.8   0:01.05 httpsd.worker
31317 luke      18   0  413m 380m  20m S  0.0 37.6   0:01.09 httpsd.worker
31319 luke      25   0  393m 360m  488 S  0.0 35.6   0:00.00 httpsd.worker

Unique ProxyPass to balancer inside vhost

<VirtualHost *>
 ServerName 000001.google.com
 ProxyPass /000001/ balancer://goo/000001/
</VirtualHost>
<VirtualHost *>
 ServerName 000002.google.com
 ProxyPass /000002/ balancer://goo/000002/
</VirtualHost>

Prefork startup time: 5 secs

Prefork “top” usage:

PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
30060 luke      18   0  248m 244m 1332 S  0.0 24.2   0:00.76 httpsd.prefork
30062 luke      25   0  248m 244m  500 S  0.0 24.1   0:00.14 httpsd.prefork
30063 luke      25   0  248m 244m  500 S  0.0 24.1   0:00.14 httpsd.prefork
30064 luke      25   0  248m 244m  500 S  0.0 24.1   0:00.09 httpsd.prefork
30065 luke      25   0  248m 244m  500 S  0.0 24.1   0:00.09 httpsd.prefork
30066 luke      25   0  248m 244m  500 S  0.0 24.1   0:00.10 httpsd.prefork

Worker startup time: 4 secs

Worker “top” usage:

PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
31419 luke      18   0  248m 244m 1504 S  0.0 24.2   0:00.72 httpsd.worker
31423 luke      25   0  518m 244m  924 S  0.0 24.2   0:00.10 httpsd.worker
31422 luke      25   0  518m 244m  920 S  0.0 24.2   0:00.14 httpsd.worker
31421 luke      25   0  248m 244m  496 S  0.0 24.1   0:00.00 httpsd.worker

Comment: Routing to the balancer really didn’t bring down RAM usage like I expected, although it’s much better than not using it. This suggests that the ProxyPass statements themselves are taking up a lot of room here, or that each vhost attached to a worker with a ProxyPass adds significantly to its footprint.

ProxyPass to repeated destination inside vhost

<VirtualHost *>
 ServerName 000001.google.com
 ProxyPass /000001/ http://www.google.com/
</VirtualHost>
<VirtualHost *>
 ServerName 000002.google.com
 ProxyPass /000002/ http://www.google.com/
</VirtualHost>

Prefork startup time: 5 secs

Prefork “top” usage:

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
30240 luke      18   0  692m 320m  17m D  2.0 31.7   0:01.87 httpsd.prefork
30239 luke      18   0  692m 231m  17m D  2.2 22.9   0:02.06 httpsd.prefork
30238 luke      18   0  692m 193m  17m D  1.5 19.1   0:01.48 httpsd.prefork
30237 luke      18   0  682m 156m  16m D  2.0 15.5   0:01.52 httpsd.prefork
30241 luke      18   0  511m 102m 6196 D  1.1 10.2   0:00.88 httpsd.prefork
30235 luke      18   0  409m  48m  12m S  0.0  4.8   0:01.09 httpsd.prefork

Worker startup time: 4 secs

Worker “top” usage:

PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
31548 luke      25   0  681m 378m  20m S  0.0 37.4   0:00.63 httpsd.worker
31549 luke      25   0  680m 378m  20m S  0.0 37.4   0:00.73 httpsd.worker
31545 luke      18   0  410m 377m  20m S  0.0 37.3   0:00.93 httpsd.worker
31547 luke      25   0  390m 356m  496 S  0.0 35.3   0:00.00 httpsd.worker

Comment: Given each ProxyPass is inside its own vhost, the proxy workers aren’t shared and are created individually, so this looks no different from the unique-destination case.

ProxyPass to repeated balancer inside vhost

<VirtualHost *>
 ServerName 000001.google.com
 ProxyPass /000001/ balancer://goo/
</VirtualHost>
<VirtualHost *>
 ServerName 000002.google.com
 ProxyPass /000002/ balancer://goo/
</VirtualHost>

Prefork startup time: 4 secs

Prefork “top” usage:

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
30341 luke      18   0  248m 244m 1332 S  0.0 24.2   0:00.76 httpsd.prefork
30343 luke      25   0  248m 243m  500 S  0.0 24.1   0:00.14 httpsd.prefork
30344 luke      25   0  248m 243m  500 S  0.0 24.1   0:00.15 httpsd.prefork
30345 luke      25   0  248m 243m  500 S  0.0 24.1   0:00.10 httpsd.prefork
30346 luke      25   0  248m 243m  500 S  0.0 24.1   0:00.09 httpsd.prefork
30347 luke      25   0  248m 243m  500 S  0.0 24.1   0:00.10 httpsd.prefork

Worker startup time: 3 secs

Worker “top” usage:

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
31646 luke      18   0  248m 244m 1504 S  0.0 24.2   0:00.69 httpsd.worker
31650 luke      25   0  518m 244m  924 S  0.0 24.2   0:00.10 httpsd.worker
31649 luke      25   0  518m 244m  920 S  0.0 24.2   0:00.15 httpsd.worker
31648 luke      25   0  248m 244m  496 S  0.0 24.1   0:00.00 httpsd.worker

Comment: Again, using a balancer helped, but not as much as expected.

Alternative: Inheriting ProxyPass inside vhost

This is a very different setup, but closer to what’s needed for the original use case: here, ProxyPass directives are specified in the main_server config and silently inherited by all vhosts. There are only a few actual ProxyPass directives (I just made 10), but each applies to every one of the 20K vhosts, so in effect there are 10 times as many proxied paths here as in any of the other scenarios (10 directives × 20,000 vhosts = 200,000 effective mappings).

<Location /000001 >
 ProxyPass http://www.google.com/000001
</Location>
<Location /000002 >
 ProxyPass http://www.google.com/000002
</Location>
...
<VirtualHost *>
 ServerName 000001.google.com
</VirtualHost>
<VirtualHost *>
 ServerName 000002.google.com
</VirtualHost>

Prefork startup time: 4 secs

Prefork “top” usage:

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 3040 luke      18   0 96068  90m 1332 S  0.0  8.9   0:00.62 httpsd.prefork
 3042 luke      25   0 96200  89m  516 S  0.0  8.9   0:00.02 httpsd.prefork
 3043 luke      25   0 96200  89m  516 S  0.0  8.9   0:00.03 httpsd.prefork
 3044 luke      25   0 96200  89m  516 S  0.0  8.9   0:00.02 httpsd.prefork
 3045 luke      25   0 96200  89m  516 S  0.0  8.9   0:00.02 httpsd.prefork
 3046 luke      25   0 96200  89m  516 S  0.0  8.9   0:00.02 httpsd.prefork

Worker startup time: 3 secs

Worker “top” usage:

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 3160 luke      18   0 96244  90m 1504 S  0.0  9.0   0:00.49 httpsd.worker
 3163 luke      25   0  364m  90m  936 S  0.0  8.9   0:00.02 httpsd.worker
 3164 luke      25   0  364m  90m  940 S  0.0  8.9   0:00.03 httpsd.worker
 3162 luke      25   0 96128  89m  496 S  0.0  8.9   0:00.00 httpsd.worker

Comment: Memory usage here is basically the same as if there were no ProxyPass statements and I just defined 20K vhosts. That size would presumably vary depending on how much configuration they’re inheriting from main_server – I didn’t do anything to try to minimize that, just used stock config.

This is definitely the cheapest memory usage for a vhost farm where all the paths are proxied to the same place. The downside is that you can’t override the defined paths with a ProxyPass in a specific vhost (the config merge puts the vhost’s ProxyPass statements after the main_server’s), but you CAN override them with RewriteRule [P] directives, which take precedence.
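For illustration, a per-vhost override along these lines is what I mean (a hypothetical sketch, not one of the tested configs; the backend hostname is a placeholder):

<VirtualHost *>
 ServerName 000099.google.com
 RewriteEngine on
 # mod_rewrite runs before mod_proxy, so this pre-empts the inherited
 # ProxyPass mapping for /000001 in this vhost only
 RewriteRule ^/000001/(.*) http://backend.example.com/other/$1 [P]
</VirtualHost>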

Alternative: 20K RewriteRule [P] directives

Here’s an alternative way using RewriteRule to get the same effect as the original ProxyPass statements:

RewriteEngine on
RewriteRule ^/000001/(.*) http://www.google.com/000001/$1 [P]
RewriteRule ^/000002/(.*) http://www.google.com/000002/$1 [P]
RewriteRule ^/000003/(.*) http://www.google.com/000003/$1 [P]

Prefork startup time: 3 secs

Prefork “top” usage:

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
27700 luke      18   0 18176  13m 1304 R  0.0  1.3   0:00.05 httpsd.prefork
27702 luke      23   0 18176  12m  500 S  0.0  1.2   0:00.00 httpsd.prefork
27703 luke      23   0 18176  12m  500 S  0.0  1.2   0:00.00 httpsd.prefork
27704 luke      23   0 18176  12m  500 S  0.0  1.2   0:00.00 httpsd.prefork
27705 luke      23   0 18176  12m  500 S  0.0  1.2   0:00.00 httpsd.prefork
27706 luke      25   0 18176  12m  500 S  0.0  1.2   0:00.00 httpsd.prefork

Comment: (Worker looks similar.) Look ma, no memory usage! However, a caveat: I’m not sure exactly how this is implemented, but I would guess that no proxy workers are predefined, and that workers are created and destroyed on the fly to fulfill proxy requests. This could be pretty nasty for performance. So why not pre-empt that problem…

Alternative: 20K RewriteRule [P] directives with a balancer

Define a balancer and use that with the RewriteRules so they don’t create workers on the fly:

RewriteEngine on
<Proxy balancer://goo>
 BalancerMember http://www.google.com
</Proxy>
RewriteRule ^/000001/(.*) balancer://goo/000001/$1 [P]
RewriteRule ^/000002/(.*) balancer://goo/000002/$1 [P]
RewriteRule ^/000003/(.*) balancer://goo/000003/$1 [P]

Memory usage looks the same, but hopefully performance is better.

Alternative: 20K RewriteRules inside vhosts

Now let’s take the same thing and wrap each rule in a vhost:

<Proxy balancer://goo>
 BalancerMember http://www.google.com
</Proxy>
<VirtualHost *>
 ServerName 000001.google.com
 RewriteEngine on
 RewriteRule ^/000001/(.*) balancer://goo/000001/$1 [P]
</VirtualHost>
<VirtualHost *>
 ServerName 000002.google.com
 RewriteEngine on
 RewriteRule ^/000002/(.*) balancer://goo/000002/$1 [P]
</VirtualHost>

Prefork startup time: 4 secs

Prefork “top” usage:

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
29031 luke      18   0  121m 117m 1376 S  0.0 11.6   0:00.44 httpsd.prefork
29033 luke      25   0  121m 116m  500 S  0.0 11.5   0:00.03 httpsd.prefork
29034 luke      25   0  121m 116m  500 S  0.0 11.5   0:00.03 httpsd.prefork
29035 luke      25   0  121m 116m  500 S  0.0 11.5   0:00.03 httpsd.prefork
29036 luke      25   0  121m 116m  500 S  0.0 11.5   0:00.03 httpsd.prefork
29037 luke      25   0  121m 116m  500 S  0.0 11.5   0:00.03 httpsd.prefork

Comment: (Worker usage similar.) Interestingly, this config does take up more RAM than bare vhosts alone, but nothing like what you see with ProxyPass directives inside them.

Control data – stock config

I said this was unscientific, but for at least a semblance of procedure, let me include data for the stock config, with none of this proxy/vhost stuff defined at all.

Prefork startup time: 2 secs

Prefork “top” usage:

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
30202 luke      18   0  7164 2160 1296 S  0.0  0.2   0:00.01 httpsd.prefork
30204 luke      25   0  7164 1392  500 S  0.0  0.1   0:00.00 httpsd.prefork
30205 luke      25   0  7164 1392  500 S  0.0  0.1   0:00.00 httpsd.prefork
30206 luke      25   0  7164 1392  500 S  0.0  0.1   0:00.00 httpsd.prefork
30207 luke      25   0  7164 1392  500 S  0.0  0.1   0:00.00 httpsd.prefork
30208 luke      25   0  7164 1392  500 S  0.0  0.1   0:00.00 httpsd.prefork

Worker startup time: 3 secs

Worker “top” usage:

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
30031 luke      18   0  7356 2328 1460 S  0.0  0.2   0:00.01 httpsd.worker
30035 luke      25   0  277m 1952  948 S  0.0  0.2   0:00.00 httpsd.worker
30033 luke      21   0  277m 1948  944 S  0.0  0.2   0:00.00 httpsd.worker
30032 luke      21   0  7112 1400  496 S  0.0  0.1   0:00.00 httpsd.worker

Control data – stock config, empty vhosts

And here’s what happens when I just define 20K empty vhosts, with no proxy configuration at all.

Prefork startup time: 3 secs

Prefork “top” usage:

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
30331 luke      18   0 97816  90m 1320 S  0.0  9.0   0:00.35 httpsd.prefork
30333 luke      25   0 97816  89m  500 S  0.0  8.9   0:00.02 httpsd.prefork
30334 luke      25   0 97816  89m  500 S  0.0  8.9   0:00.02 httpsd.prefork
30335 luke      25   0 97816  89m  500 S  0.0  8.9   0:00.02 httpsd.prefork
30336 luke      25   0 97816  89m  500 S  0.0  8.9   0:00.02 httpsd.prefork
30337 luke      25   0 97816  89m  500 S  0.0  8.9   0:00.02 httpsd.prefork

Worker startup time: 3 secs

Worker “top” usage:

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
30382 luke      18   0 97992  90m 1492 S  0.0  9.0   0:00.30 httpsd.worker
30387 luke      25   0  365m  90m  948 S  0.0  8.9   0:00.02 httpsd.worker
30385 luke      25   0  365m  90m  944 S  0.0  8.9   0:00.02 httpsd.worker
30384 luke      25   0 97884  89m  508 S  0.0  8.9   0:00.00 httpsd.worker