Forum OpenACS Q&A: Re: OpenACS clusters

4: Re: OpenACS clusters (response to 1)
Posted by Eduardo Santos on
Hi Claudio,

I've been working with clusters for some time, and the adjustments are not as simple as they may look. I've had several problems, and you have to change some OpenACS configurations for the cluster to work properly. First, take a look at this thread, where I ran a lot of tests: http://www.openacs.org/forums/message-view?message_id=1539800

I'll try to summarize some of what I've learned here; feel free to ask any further questions.

1 - If you can avoid it, DO NOT use Virtual Machines. Most of the performance problems in clusters are related to the thread creation/destruction process, and this is mostly an I/O operation. When you get to the point where I/O is the bottleneck, VMs are bad because the disks are shared between them. If you can't avoid VMs, try to set up a dedicated disk for each one. If there's no way you can do that, well, you are going to have performance issues.

2 - Configure the kernel parameters for clustering so that the util_memoize cache works across nodes. This configuration can be a little annoying, but it's absolutely necessary.

3 - Avoid NFS if you can. Just remember that OpenACS won't start up if it loses some of the basic directories, such as the content repository. The best solution I could find was to set up an exclusive communication interface between the servers (such as a crossover cable) and give every server its own content-repository partition. From time to time I run rsync to synchronize the files on both servers. If you have a storage appliance or something similar, that would be even better.
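
For illustration only, the periodic sync could be as simple as the following rsync call run from cron; the paths, the service user and the peer address are placeholders for your own setup, not taken from the thread:

# Push the canonical node's content repository files to the peer over the
# dedicated link. Paths, the service0 user and 10.0.0.2 are placeholders.
# This one-way push assumes uploads land only on the node you run it from.
rsync -az --delete \
  /var/lib/aolserver/service0/content-repository-content-files/ \
  service0@10.0.0.2:/var/lib/aolserver/service0/content-repository-content-files/

Running something like this every few minutes from cron matches the "from time to time" approach described above.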

These are the first things I can think of right now. Let me know if you have any other questions.

5: Re: OpenACS clusters (response to 4)
Posted by Claudio Pasolini on
Thank you for answering, Eduardo, and for pointing me to a very useful resource.

Unfortunately I can't avoid using Virtual Machines: this is a customer decision.

Actually my absolute priority is to make things work and only then worry about performance.

The first concern is about cache management. I have already configured the Cluster kernel parameters and set PermissionCacheP to '1', but I wonder if this is enough or if I should apply some patch to my standard OpenACS 5.5.

The second concern is about the file storage, which is configured to store files in the file system. At the moment the content-repository-content-files folder is shared by the application servers via NFS, and I wonder whether any conflicts could arise.

I don't understand how I could set up 'an exclusive communication interface between servers': could you kindly elaborate?

6: Re: OpenACS clusters (response to 5)
Posted by Eduardo Santos on
Hi Claudio,

I know using VMs is usually a customer decision, but you will see some problems in the future. Just so you know. :)

Take a look at this thread about caching: https://openacs.org/forums/message-view?message_id=393986

If you configure the parameters correctly, you should see no problems.
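
As a quick sanity check (a sketch, not part of OpenACS itself), you can verify that each node can reach its peers over HTTP, since that is the channel the cluster code uses to propagate cache flushes; the hostnames and port below are placeholders:

# Each node should get an HTTP response from every peer listed in the
# cluster parameters; node names and port 8000 are made up for this example.
for peer in node1.example.com node2.example.com; do
  curl -s -o /dev/null -w "%{http_code} $peer\n" "http://$peer:8000/"
done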

About the file storage, there's no problem with storing the content as files; most people do it. There should also be no problem sharing it between the servers, because the folders are controlled by PostgreSQL. As long as the systems access the same database, you will be OK.

About the exclusive communication, I have a crossover cable plugged into the servers' Ethernet interfaces just to replicate the content-repository folders. My experience is that I/O operations should be isolated from user-facing traffic, and so should PostgreSQL access. With this configuration there should be no large TCP/IP queue.
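
Just to illustrate what such a dedicated link can look like (the interface name and addresses are placeholders, not taken from the thread):

# On server A: bring up the crossover-cable NIC on a tiny private subnet.
ip addr add 10.0.0.1/30 dev eth1
ip link set eth1 up
# On server B: the other end of the cable.
ip addr add 10.0.0.2/30 dev eth1
ip link set eth1 up
# Replication traffic (e.g. the rsync shown earlier) then targets 10.0.0.2
# and never touches the public interface.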

Hope it helps

7: Re: OpenACS clusters (response to 6)
Posted by Gustaf Neumann on
One of the major strengths of OpenACS + AOLserver/NaviServer is its ability to exploit the power of multi-core and multi-processor architectures. AOLserver was pioneering in this regard and is still at least among the best in this category. Since multi-core architectures are already a commodity, and substantial multi-core systems (32+ cores) are becoming cheap (see Andrew's posting above), these are actually good times for this architecture.

But certainly, there is also a trend towards virtualization, and in most cases a VM boils down to a machine with a single core (from the hosted OS's point of view). Such a setup will not be able to use the scalability strengths of the server. The situation will not improve significantly with a cluster setup, which requires substantial inter-cluster communication, currently built on top of HTTP. The problem areas are the caches and nsvs. I am pretty sure the cluster support can be significantly improved, but I am not sure it is worth the effort. Virtualization on its own is not the problem; virtual machines with single cores are bad, as are the missing transparency about the bottlenecks of the VM configuration (e.g. I/O) and the missing knowledge about the influence of other processes running on the same hardware.

We also use a virtualized environment in production (p570, POWER6), but our virtual machine (called an LPAR in the IBM world) has 64 logical CPUs for our production server. This machine is not in the price range of the Opteron machines that Andrew mentioned, but it scales perfectly and even allows moving running VMs from one physical machine to another if necessary (which requires an appropriate storage architecture, etc.). After the fire last year we had to squeeze 10 of our servers onto one smaller physical machine; this worked very well, even with the 1600 concurrently active users we had at that time.

In short, for heavy-load applications I would not recommend using a cluster of single-CPU VMs. Remember, there was a time when openacs.org was very slow and unreliable. That was when it was running on a VM with a single CPU. Many of us (including the OCT) were not able to pinpoint and eliminate the problem at that time. Interestingly enough, the problem had not been there on the much older and slower hardware before, and it disappeared when the server moved to Vienna in June 2008.

18: Re: OpenACS clusters (response to 7)
Posted by Mark Aufflick on
I agree with everyone about using real hardware with AOLserver - it flies.

I am, though, in the process of moving some small OpenACS sites from a (very old, and still fast) physical server to Linode, which is going pretty well so far.

I thought people might be interested to know that Linode VMs are multicore:

mark@li165-84:~$ cat /proc/cpuinfo |egrep 'processor|model name'
processor   : 0
model name  : Intel(R) Xeon(R) CPU           L5520  @ 2.27GHz
processor   : 1
model name  : Intel(R) Xeon(R) CPU           L5520  @ 2.27GHz
processor   : 2
model name  : Intel(R) Xeon(R) CPU           L5520  @ 2.27GHz
processor   : 3
model name  : Intel(R) Xeon(R) CPU           L5520  @ 2.27GHz

Of course this is all shared, but with the right plan and assuming the right level of contention, you are getting the benefits of AOLserver threading.