Forum OpenACS Development: Re: Re: Re: Roadmap: How does Openacs Scale?

Posted by Gustaf Neumann on
Rocael,

yes, in all test-cases db+nsd were running on the same box. For these test-cases we used only one nsd process with 40 connection threads.

on our production system (often more than 1200 concurrent users, average response time below 0.2 seconds, about 1.5 million oacs objects) we have the same setup, everything on one box. we got the p570 with 16 cores; every core has the equivalent of hyperthreading, so we have 32 cpus, which are split between 2 lpars (logical partitions, like a vm; every lpar might have a different operating system, different cpus and devices assigned, etc). we are running 64-bit linux. the power5+ architecture has great memory bandwidth and low latency, which seems to be the reason why it scales so well. this is great hardware, with the disadvantage that we are getting lazier about performance tuning.

yes, we are still rebooting nsd daily. the "unable to alloc" messages are gone since we changed to 64 bit. the size of nsd grows in our current setup to about 2GB. in the graph you can see nsd growing after the reboot at 3:30am
http://media.wu-wien.ac.at/download/nsdsize.png
without a reboot it reaches about twice that size by the second day; the growth seems highly correlated with the number of requests.

about the benchmark tests: Peter Alberer did this. we monitored our usage pattern, developed from that a mix of queries at the tcl api level (our in-house developed online exercises, forums, etc), configured these to run without or with faked ad_conn info, defined a user session consisting of these tasks, and ran these tasks in multiple threads (using libthread). For this, we used a reasonably sized database (>1 million oacs objects). The results are the figures in Learn-Bench.
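The approach above (scripted user sessions replayed concurrently, one thread per simulated user) can be sketched roughly as follows. The original harness was written in Tcl using libthread against the OpenACS Tcl API; this is only a minimal Python stand-in, and the task names (`view_forum`, `submit_exercise`) and the `run_load` helper are hypothetical, not part of the actual benchmark.

```python
import threading
import time
from collections import defaultdict

def run_load(session_tasks, n_threads, sessions_per_thread):
    """Replay a scripted user session concurrently and collect latencies.

    session_tasks: list of (name, callable) pairs; each callable stands in
    for one scripted request against the server's API.
    Returns a dict mapping task name -> list of observed latencies (seconds).
    """
    lock = threading.Lock()
    latencies = defaultdict(list)

    def worker():
        # each thread simulates one user running whole sessions in a loop
        for _ in range(sessions_per_thread):
            for name, task in session_tasks:
                start = time.perf_counter()
                task()  # one scripted request
                elapsed = time.perf_counter() - start
                with lock:
                    latencies[name].append(elapsed)

    threads = [threading.Thread(target=worker) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return latencies

if __name__ == "__main__":
    # Hypothetical stand-ins for the real Tcl-level tasks (forums, exercises).
    tasks = [("view_forum", lambda: time.sleep(0.001)),
             ("submit_exercise", lambda: time.sleep(0.002))]
    results = run_load(tasks, n_threads=4, sessions_per_thread=5)
    for name, times in sorted(results.items()):
        print(name, len(times))
```

The point of the design is that each thread walks through the full session mix, so the request ratio seen by the server matches the monitored usage pattern rather than hammering a single URL.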

It was not easy to get access to these machines to run our tests. IBM and one of its local dealers were quite helpful, so we got more results for this family. we were not keen on itanium systems, since the price was high and the future unclear.

i can't and won't claim that our benchmarks are significant for anybody except us. we have a different load pattern, and our installation differs in about 11,000 changes from current oacs-5-2. most probably there are now much faster or cheaper machines out there than the ones we got access to about one year ago.

what kind of information are you interested in?