Forum OpenACS Q&A: Re: OpenACS clusters
I know using VMs is usually a customer decision, but you will see some problems in the future. Just to let you know. :)
Take a look at this thread about caching: https://openacs.org/forums/message-view?message_id=393986
If you configure the parameters correctly, you should see no problems.
About the file-storage, there's no problem with storing content on the filesystem; most people do it. There should also be no problem sharing it between the servers, because the folders are tracked by PostgreSQL. As long as the systems access the same database, you will be OK.
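As a rough sketch of that setup (the storage host, mount point, and paths below are my assumptions for illustration, not details from the post): each front-end mounts the same content-repository directory from shared storage, while the file metadata stays in the one shared PostgreSQL database.

```shell
# Hypothetical example: export the content-repository directory from a
# storage host and mount it on every OpenACS front-end. The real path is
# whatever your file-storage/content-repository parameters point to.

# /etc/fstab entry on each front-end (host and paths are made up):
#   storage01:/export/openacs/content-repository  /var/www/openacs/content-repository  nfs  rw,hard  0 0

sudo mount /var/www/openacs/content-repository

# Every front-end then talks to the same PostgreSQL instance, which is
# what keeps the folder structure consistent across servers.
```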
About the dedicated communication link, I have a crossover cable plugged into the servers' Ethernet interfaces just to replicate the content-repository folders. My experience is that I/O operations should be isolated from user-facing traffic, and so should PostgreSQL access. With this configuration there should be no large TCP/IP queues.
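A minimal sketch of that replication, assuming rsync over the private link (the peer address and paths are hypothetical; the post does not say which tool is used): syncing the content-repository folder via the crossover interface keeps the traffic off the public network.

```shell
# Hypothetical replication over a dedicated crossover link.
# 10.0.0.2 is assumed to be the peer's address on the private interface;
# the content-repository path is whatever your installation uses.
rsync -az --delete \
    /var/www/openacs/content-repository/ \
    10.0.0.2:/var/www/openacs/content-repository/
```

Run from cron or a scheduled proc, this keeps the second server's copy current without competing with user requests for bandwidth.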
Hope it helps
But certainly, there is also a trend towards virtualization, and in most cases a VM boils down to a machine with a single core (from the hosted OS's point of view). Such a setup will not be able to use the scalability strengths of the server. The situation will not improve significantly with a cluster setup, which requires substantial inter-cluster communication, currently built on top of HTTP. Problem areas are the caches and nsvs. I am pretty sure the cluster support can be significantly improved, but I am not sure it is worth the effort. Virtualization on its own is not the problem, but virtual machines with single cores are bad, as are the missing transparency of the bottlenecks of the VM configuration (e.g. I/O) and the missing knowledge about the influence of other processes running on the same hardware.
We are also using a virtualized environment in production (p570, POWER6), but our virtual machine (called an LPAR in the IBM world) has 64 logical CPUs for our production server. This machine is not in the price range of the Opteron machines that Andrew mentioned, but it scales perfectly and even allows moving running VMs from one physical machine to another if necessary (this requires an appropriate storage architecture, etc.). After the fire last year we had to squeeze 10 of our servers onto one smaller physical machine; this worked very well, even with the 1600 concurrently active users we had at that time.
In short, for heavy-load applications, I would not recommend using a cluster of single-CPU VMs. Remember, there was a time when openacs.org was very slow and unreliable. This was at the time when it was running on a VM with a single CPU. Many of us (and the OCT) were not able to pinpoint and eliminate the problem at that time. Interestingly enough, the problem was not there on the much older and slower hardware before, and it disappeared when the server moved to Vienna in June 2008.
I am, though, in the process of moving some small OpenACS sites from a (very old, and still fast) physical server to Linode, which is going pretty well so far.
I thought people might be interested to know that linode VMs are multicore:
mark@li165-84:~$ cat /proc/cpuinfo | egrep 'processor|model name'
processor  : 0
model name : Intel(R) Xeon(R) CPU L5520 @ 2.27GHz
processor  : 1
model name : Intel(R) Xeon(R) CPU L5520 @ 2.27GHz
processor  : 2
model name : Intel(R) Xeon(R) CPU L5520 @ 2.27GHz
processor  : 3
model name : Intel(R) Xeon(R) CPU L5520 @ 2.27GHz

Of course this is all shared, but with the right plan and assuming the right level of contention, you are getting benefits from the AOLserver threading.
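A quicker way to get the same count (assuming a Linux guest; this is just standard /proc parsing, not something specific from the thread):

```shell
# Number of logical CPUs the VM actually sees; AOLserver can run roughly
# this many request threads truly in parallel.
grep -c '^processor' /proc/cpuinfo
```

On most modern Linux systems, `nproc` prints the same number.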