
Forum .LRN Q&A: AOL performance data


Posted by Mark Battersby on
Hi,

Can anyone provide AOLserver performance data? I am trying to guesstimate the server hardware required to support 100 URLs per second.

It would be useful to know if anyone has experience with blade servers. My idea is to use standard 2 GHz processors with, say, 1 GB of memory. Assuming one blade can support 100 URLs per second, we simply add load balancing and another blade server for each additional 100 URLs per second.
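To make that extrapolation concrete, here is a back-of-envelope sketch. Every number in it is an assumption for illustration, not a measurement; benchmark your own pages on one box first, and leave headroom rather than running each blade at its limit:

```python
# Hypothetical capacity arithmetic -- all figures are assumptions,
# not measurements of any real AOLserver installation.
import math

def boxes_needed(target_rps, measured_rps_per_box, headroom=0.7):
    """Number of identical front-end boxes to serve target_rps,
    keeping each box at `headroom` of its measured capacity."""
    usable = measured_rps_per_box * headroom
    return math.ceil(target_rps / usable)

# Assumed: one 2 GHz blade benchmarks at 120 requests/second.
print(boxes_needed(100, 120))
```

With a 70% headroom factor, a target of 100 requests/second on blades measured at 120 requests/second works out to two boxes rather than one, which is the kind of margin the simple "one blade per 100 URLs" rule hides.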

Cheers

Mark Battersby
email: mark.battersby@cvision.cn

2: Re: AOL performance data (response to 1)
Posted by Jun Yamog on
Hi Mark,

It will depend greatly on what the pages behind those 100 URLs are doing.  If they are simple operations then a single blade may do.  Employing caching also helps.

OpenACS supports the load balancing / clustering you are looking for out of the box, so you can add capacity by adding more front-end boxes.  At some point, though, the db server will become the bottleneck, and that is when you will need to employ caching.

3: Re: AOL performance data (response to 1)
Posted by Andrew Piskorski on
Blade servers are not fundamentally any different from 1U rackmounts or any other server form factor. Blades are smaller, sexier, and generally overpriced unless you really need their higher rack density.

Mark, you need a much more detailed idea of what your application is trying to do. (And you didn't even mention disk IO, which for an RDBMS can be the limiting factor.) As Jun pointed out, serving 100 requests per second could be nearly trivial or very difficult; it all depends on what those requests are doing. You posted this in the .LRN forum, so perhaps you intend to run .LRN?

Modern CPUs, RAM, disks, and networks are so fast that many interesting sites can be run on quite modest hardware. But even much fancier hardware isn't all that expensive nowadays. For more towards the large, high-volume end of things, see Denis Roy's Jan. and Feb. posts with details on the hardware they use for aiesec.net.

I may be a bit out of date, but currently I'd guess that a rather high-end dual SMP system might have two 3+ GHz Intel Xeons (or AMD Athlon MPs or Opterons of similar or better performance), 8+ GB ECC RAM of commensurate speed, and, depending on use, 4-12 or more fast disks in a RAID array (more disks are better). I'm not sure what that machine would cost currently, but you're not talking tens of thousands of dollars here. Maybe there are some OpenACS sites out there big and complicated enough that a monster dual-CPU box like that would be just too wimpy to run the RDBMS, but if there are, I haven't personally heard of any of them.

Most sites probably never need hardware anywhere near that. (For example, openacs.org certainly runs on much more modest hardware!) And anyway, if you think you need more than that, my guess is you're either Yahoo or AOL or some big corporate entity with a massive public website, or (more likely) you have egregiously poorly performing software, and just throwing more hardware at it probably isn't a good idea anyway.

Hardware gets faster all the time, but the design of your software is still, and probably always will be, the dominant factor in meeting your performance goals for a large and complex site. If nothing else, Jim Davidson's Feb. 2000 slides on the AOL Digital City site architecture should help show that.

4: Re: AOL performance data (response to 1)
Posted by Jun Yamog on
Yes, the software is very important.  From first-hand experience, this is what happened on one of my current projects (not OpenACS).  The client bought a really hefty Xeon server with 4 GB of RAM, since we estimated we would need about 1 GB for Java and the rest for Oracle.  It's running about 100+ subsites.  Once most of the data was in, we did some stress tests.  Sadly, each subsite seemed to need 20 MB, so we would need about 2.5 GB including caching and real code.  The natural option was to add hardware, but before we did that we studied the code.

After some tests and theories, and a 5-day trial key of Optimizeit, we were able to pinpoint the problems.  With less than 50 lines of code changed, we are now running in less than 200 MB with caching... not only that, we are about 10x faster than the old code.  And since we have a big fat server, we just decided to turn up the cache :)
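Optimizeit is a Java profiler, and Jun's project was Java, but the same "measure before you buy hardware" step is cheap in any stack. As a rough analogue, here is a sketch using Python's standard `tracemalloc` module to pinpoint which lines allocate the most memory; `leaky_subsite_init` is a hypothetical stand-in for per-subsite setup code that duplicates data:

```python
# Illustrative memory-profiling sketch using the stdlib tracemalloc
# module (a rough analogue of what a tool like Optimizeit does for Java).
import tracemalloc

def leaky_subsite_init():
    # Hypothetical per-subsite setup that builds its own copy of data
    # that could have been shared across subsites.
    return [{"config": i} for i in range(100_000)]

tracemalloc.start()
data = leaky_subsite_init()
snapshot = tracemalloc.take_snapshot()

# The top allocation sites, by total size, point straight at the
# handful of lines worth changing -- often far fewer than 50.
for stat in snapshot.statistics("lineno")[:3]:
    print(stat)
```

The point is the workflow, not the tool: a profiler turns "we need more RAM" into "these few lines allocate most of it", which is exactly how a sub-50-line change can replace a hardware purchase.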

Sometimes it pays to get your hands dirty, and Open Source is a real bonus: you don't need to wait for the vendor to fix it.  Which also reminds me, Lars's new profiler will likely save someone big bucks one of these days. :)