Forum OpenACS Development: Re: cache statistics
So, while it is possible to measure the amount of memory that is put into these stores, it is not possible (without significant C programming) in the general case to figure out what this content consumes per store. For example, storing one byte in a Tcl variable in a namespace needs significantly more than one byte, since it requires the variable structure, the variable name, a Tcl_Obj, and maybe a new namespace with its hash tables. Furthermore, a major problem with the current AOLserver memory allocator is that, although it is optimized for high concurrency, it tends to fragment after some time, which increases the memory footprint and reduces hardware cache efficiency.
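The same effect can be demonstrated in any dynamic language. As a rough analogy (in Python, not Tcl, and with `sys.getsizeof` measuring only the top-level structures, not every allocation the interpreter makes), here is a sketch of how a one-byte payload carries much more than one byte of interpreter overhead:

```python
import sys

# The actual payload is a single byte...
payload = b"x"

# ...but storing it under a name in a table (analogous to a Tcl
# variable in a namespace) also costs the value object, the name
# string, and the hash-table container itself.
name = "my_var"
table = {name: payload}

overhead = sys.getsizeof(payload) + sys.getsizeof(name) + sys.getsizeof(table)
print(overhead)  # far more than the 1 byte of payload
```

The exact numbers differ between Python and Tcl, but the point stands: per-value interpreter overhead dominates when the stored values are small, so summing payload sizes badly underestimates the real footprint of a store.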
How busy is your site with the 3GB memory footprint? Is it an Opteron machine? What are your maxthreads and your threadtimeout? Memory consumption is directly related to the number of threads (connection threads and scheduled threads), since every thread keeps a private copy of the Tcl procs (+ vars). In a typical dotlrn instance we have about 8,000 procs. So, if one has e.g. 50 threads (e.g. 40 connection threads + 10 scheduled), this makes 400,000 procs....
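The arithmetic above is just a back-of-envelope multiplication; a minimal sketch (the proc and thread counts are the ones quoted in the post, not measured values):

```python
# Each AOLserver thread holds its own Tcl interpreter, and each
# interpreter carries a private copy of every proc definition.
procs_per_interp = 8000   # typical dotlrn instance (from the post)
conn_threads = 40         # connection threads
sched_threads = 10        # scheduled threads

total_proc_copies = procs_per_interp * (conn_threads + sched_threads)
print(total_proc_copies)  # 400000
```

This is why reducing maxthreads (or letting idle threads exit via threadtimeout) directly shrinks the resident footprint: each thread you drop releases a full private copy of the proc table.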
The site has minthreads of 30 and maxthreads of 40, as the company has only 35 employees, though most of them are active throughout the day.
The processor is a Xeon 2.8 GHz. It is a Debian Sarge system with 6GB of total memory, though I have not yet recompiled the kernel to make Linux aware of the 2 additional GB (it currently only sees 4GB).
I assume we have quite a few more procs, since we have dotlrn plus contacts/ams/project-manager/invoices .... installed, so this could explain some of the footprint...
Did you find a way around this problem?
Fragmentation is probably also a factor, since we do not run into serious out-of-memory problems (i.e., swapping to disk), especially as we restart the server every night. Still, the footprint remains high, and I'm not sure whether AOLserver is still efficient with this large a footprint.